00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 229 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.083 using credential 00000000-0000-0000-0000-000000000002 00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.187 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.198 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.210 Checking out Revision 974e6abc19174775f0f1ea53bba692f31ffb01a8 (FETCH_HEAD) 00:00:06.210 > git config core.sparsecheckout # timeout=10 00:00:06.219 > git read-tree -mu HEAD # timeout=10 00:00:06.234 > git checkout -f 974e6abc19174775f0f1ea53bba692f31ffb01a8 # timeout=5 00:00:06.253 Commit message: "jenkins/config: change SM0 ip due to lab relocation" 00:00:06.254 > git rev-list 
--no-walk 974e6abc19174775f0f1ea53bba692f31ffb01a8 # timeout=10 00:00:06.358 [Pipeline] Start of Pipeline 00:00:06.369 [Pipeline] library 00:00:06.370 Loading library shm_lib@master 00:00:06.371 Library shm_lib@master is cached. Copying from home. 00:00:06.387 [Pipeline] node 00:00:06.402 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.403 [Pipeline] { 00:00:06.416 [Pipeline] catchError 00:00:06.418 [Pipeline] { 00:00:06.431 [Pipeline] wrap 00:00:06.440 [Pipeline] { 00:00:06.446 [Pipeline] stage 00:00:06.447 [Pipeline] { (Prologue) 00:00:06.609 [Pipeline] sh 00:00:06.890 + logger -p user.info -t JENKINS-CI 00:00:06.908 [Pipeline] echo 00:00:06.909 Node: GP8 00:00:06.917 [Pipeline] sh 00:00:07.221 [Pipeline] setCustomBuildProperty 00:00:07.237 [Pipeline] echo 00:00:07.238 Cleanup processes 00:00:07.243 [Pipeline] sh 00:00:07.555 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.555 2396163 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.569 [Pipeline] sh 00:00:07.855 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.855 ++ grep -v 'sudo pgrep' 00:00:07.855 ++ awk '{print $1}' 00:00:07.855 + sudo kill -9 00:00:07.855 + true 00:00:07.871 [Pipeline] cleanWs 00:00:08.046 [WS-CLEANUP] Deleting project workspace... 00:00:08.046 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.053 [WS-CLEANUP] done 00:00:08.057 [Pipeline] setCustomBuildProperty 00:00:08.067 [Pipeline] sh 00:00:08.353 + sudo git config --global --replace-all safe.directory '*' 00:00:08.420 [Pipeline] nodesByLabel 00:00:08.421 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.431 [Pipeline] httpRequest 00:00:08.435 HttpMethod: GET 00:00:08.436 URL: http://10.211.164.101/packages/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:08.440 Sending request to url: http://10.211.164.101/packages/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:08.456 Response Code: HTTP/1.1 200 OK 00:00:08.457 Success: Status code 200 is in the accepted range: 200,404 00:00:08.457 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:11.170 [Pipeline] sh 00:00:11.452 + tar --no-same-owner -xf jbp_974e6abc19174775f0f1ea53bba692f31ffb01a8.tar.gz 00:00:11.471 [Pipeline] httpRequest 00:00:11.476 HttpMethod: GET 00:00:11.476 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:11.478 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:11.493 Response Code: HTTP/1.1 200 OK 00:00:11.493 Success: Status code 200 is in the accepted range: 200,404 00:00:11.494 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:33.649 [Pipeline] sh 00:00:33.945 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:00:36.496 [Pipeline] sh 00:00:36.781 + git -C spdk log --oneline -n5 00:00:36.781 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:00:36.781 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:00:36.781 54944c1d1 event: don't NOTICELOG when no RPC server started 00:00:36.781 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 
00:00:36.781 5dc808124 init: add spdk_subsystem_exists() 00:00:36.794 [Pipeline] sh 00:00:37.078 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/89/22689/2 00:00:38.018 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:38.018 * branch refs/changes/89/22689/2 -> FETCH_HEAD 00:00:38.031 [Pipeline] sh 00:00:38.314 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:39.249 Previous HEAD position was afe4186365 pmdinfogen: avoid empty string in ELFSymbol() 00:00:39.249 HEAD is now at d5497a26cb isal: compile compress_isal PMD without system-wide libisal 00:00:39.260 [Pipeline] } 00:00:39.281 [Pipeline] // stage 00:00:39.289 [Pipeline] stage 00:00:39.292 [Pipeline] { (Prepare) 00:00:39.310 [Pipeline] writeFile 00:00:39.325 [Pipeline] sh 00:00:39.603 + logger -p user.info -t JENKINS-CI 00:00:39.615 [Pipeline] sh 00:00:39.900 + logger -p user.info -t JENKINS-CI 00:00:39.913 [Pipeline] sh 00:00:40.201 + cat autorun-spdk.conf 00:00:40.201 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.201 SPDK_TEST_NVMF=1 00:00:40.201 SPDK_TEST_NVME_CLI=1 00:00:40.201 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.201 SPDK_TEST_NVMF_NICS=e810 00:00:40.201 SPDK_TEST_VFIOUSER=1 00:00:40.201 SPDK_RUN_UBSAN=1 00:00:40.201 NET_TYPE=phy 00:00:40.209 RUN_NIGHTLY= 00:00:40.214 [Pipeline] readFile 00:00:40.241 [Pipeline] withEnv 00:00:40.244 [Pipeline] { 00:00:40.260 [Pipeline] sh 00:00:40.545 + set -ex 00:00:40.545 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:40.545 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.545 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.545 ++ SPDK_TEST_NVMF=1 00:00:40.545 ++ SPDK_TEST_NVME_CLI=1 00:00:40.545 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.545 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.545 ++ SPDK_TEST_VFIOUSER=1 00:00:40.545 ++ SPDK_RUN_UBSAN=1 00:00:40.545 ++ NET_TYPE=phy 00:00:40.545 ++ RUN_NIGHTLY= 00:00:40.545 + case $SPDK_TEST_NVMF_NICS in 00:00:40.545 + DRIVERS=ice 00:00:40.545 + [[ tcp == \r\d\m\a 
]] 00:00:40.545 + [[ -n ice ]] 00:00:40.545 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.545 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:40.545 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:40.545 rmmod: ERROR: Module irdma is not currently loaded 00:00:40.545 rmmod: ERROR: Module i40iw is not currently loaded 00:00:40.545 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:40.545 + true 00:00:40.545 + for D in $DRIVERS 00:00:40.545 + sudo modprobe ice 00:00:40.545 + exit 0 00:00:40.553 [Pipeline] } 00:00:40.563 [Pipeline] // withEnv 00:00:40.567 [Pipeline] } 00:00:40.578 [Pipeline] // stage 00:00:40.585 [Pipeline] catchError 00:00:40.586 [Pipeline] { 00:00:40.600 [Pipeline] timeout 00:00:40.600 Timeout set to expire in 40 min 00:00:40.602 [Pipeline] { 00:00:40.615 [Pipeline] stage 00:00:40.617 [Pipeline] { (Tests) 00:00:40.630 [Pipeline] sh 00:00:40.929 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.929 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.929 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.929 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:40.929 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.929 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.929 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:40.929 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.929 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.929 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.929 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.929 + source /etc/os-release 00:00:40.929 ++ NAME='Fedora Linux' 00:00:40.929 ++ VERSION='38 (Cloud Edition)' 00:00:40.929 ++ ID=fedora 00:00:40.929 ++ VERSION_ID=38 00:00:40.929 ++ VERSION_CODENAME= 00:00:40.929 ++ PLATFORM_ID=platform:f38 00:00:40.929 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:40.929 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:40.929 ++ LOGO=fedora-logo-icon 00:00:40.929 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:40.929 ++ HOME_URL=https://fedoraproject.org/ 00:00:40.929 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:40.929 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:40.929 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:40.929 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:40.929 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:40.929 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:40.929 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:40.929 ++ SUPPORT_END=2024-05-14 00:00:40.929 ++ VARIANT='Cloud Edition' 00:00:40.929 ++ VARIANT_ID=cloud 00:00:40.929 + uname -a 00:00:40.929 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:40.929 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:41.867 Hugepages 00:00:41.867 node hugesize free / total 00:00:41.867 node0 1048576kB 0 / 0 00:00:41.867 node0 2048kB 0 / 0 00:00:41.867 node1 1048576kB 0 / 0 00:00:41.867 node1 2048kB 0 / 0 00:00:41.867 00:00:41.867 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:41.867 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 
00:00:41.867 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:41.867 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:41.867 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:42.126 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:42.126 + rm -f /tmp/spdk-ld-path 00:00:42.126 + source autorun-spdk.conf 00:00:42.126 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.126 ++ SPDK_TEST_NVMF=1 00:00:42.126 ++ SPDK_TEST_NVME_CLI=1 00:00:42.126 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.126 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.126 ++ SPDK_TEST_VFIOUSER=1 00:00:42.126 ++ SPDK_RUN_UBSAN=1 00:00:42.126 ++ NET_TYPE=phy 00:00:42.126 ++ RUN_NIGHTLY= 00:00:42.126 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.126 + [[ -n '' ]] 00:00:42.126 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.126 + for M in /var/spdk/build-*-manifest.txt 00:00:42.126 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.126 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.126 + for M in /var/spdk/build-*-manifest.txt 00:00:42.126 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.126 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.126 ++ uname 00:00:42.126 + [[ Linux == \L\i\n\u\x ]] 00:00:42.126 + sudo dmesg -T 00:00:42.126 + sudo dmesg --clear 00:00:42.126 + dmesg_pid=2396888 
00:00:42.126 + [[ Fedora Linux == FreeBSD ]] 00:00:42.126 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.126 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.126 + sudo dmesg -Tw 00:00:42.126 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.126 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.126 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.126 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.126 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.126 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:42.126 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.126 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.126 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.126 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.126 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.126 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.126 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.126 Test configuration: 00:00:42.126 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.126 SPDK_TEST_NVMF=1 00:00:42.126 SPDK_TEST_NVME_CLI=1 00:00:42.126 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.126 SPDK_TEST_NVMF_NICS=e810 00:00:42.126 SPDK_TEST_VFIOUSER=1 00:00:42.126 SPDK_RUN_UBSAN=1 00:00:42.126 NET_TYPE=phy 00:00:42.126 RUN_NIGHTLY= 13:29:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:42.126 13:29:44 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.126 13:29:44 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.126 13:29:44 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.126 13:29:44 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.126 13:29:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.126 13:29:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.126 13:29:44 -- paths/export.sh@5 -- $ export PATH 00:00:42.126 13:29:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.126 13:29:44 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:42.126 13:29:44 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:42.126 13:29:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713439784.XXXXXX 
00:00:42.126 13:29:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713439784.1Tu07m 00:00:42.126 13:29:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:42.126 13:29:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:42.126 13:29:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:42.126 13:29:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.126 13:29:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.126 13:29:44 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:42.126 13:29:44 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:42.126 13:29:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.126 13:29:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.126 13:29:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:42.126 13:29:44 -- pm/common@17 -- $ local monitor 00:00:42.126 13:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.126 13:29:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2396922 00:00:42.126 13:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.126 13:29:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2396924 00:00:42.126 13:29:44 -- pm/common@21 -- $ date +%s 00:00:42.126 13:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.126 
13:29:44 -- pm/common@21 -- $ date +%s 00:00:42.126 13:29:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2396926 00:00:42.126 13:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.126 13:29:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2396930 00:00:42.126 13:29:44 -- pm/common@21 -- $ date +%s 00:00:42.126 13:29:44 -- pm/common@26 -- $ sleep 1 00:00:42.126 13:29:44 -- pm/common@21 -- $ date +%s 00:00:42.126 13:29:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439784 00:00:42.126 13:29:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439784 00:00:42.126 13:29:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439784 00:00:42.127 13:29:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713439784 00:00:42.127 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439784_collect-vmstat.pm.log 00:00:42.127 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439784_collect-bmc-pm.bmc.pm.log 00:00:42.127 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439784_collect-cpu-load.pm.log 00:00:42.127 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713439784_collect-cpu-temp.pm.log 00:00:43.064 13:29:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:43.064 13:29:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.064 13:29:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.064 13:29:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.064 13:29:45 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.064 Thu Apr 18 11:29:45 AM UTC 2024 00:00:43.064 13:29:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.064 v24.05-pre-407-g65b4e17c6 00:00:43.064 13:29:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.064 13:29:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.064 13:29:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.064 13:29:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:43.064 13:29:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:43.064 13:29:45 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.323 ************************************ 00:00:43.323 START TEST ubsan 00:00:43.323 ************************************ 00:00:43.323 13:29:45 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:43.323 using ubsan 00:00:43.323 00:00:43.323 real 0m0.000s 00:00:43.323 user 0m0.000s 00:00:43.323 sys 0m0.000s 00:00:43.323 13:29:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:43.323 13:29:45 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.323 ************************************ 00:00:43.323 END TEST ubsan 00:00:43.323 ************************************ 00:00:43.323 13:29:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.323 13:29:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.323 13:29:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.323 13:29:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.323 
13:29:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.323 13:29:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.323 13:29:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.323 13:29:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.323 13:29:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.323 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.323 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:43.582 Using 'verbs' RDMA provider 00:00:54.127 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:04.108 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:04.108 Creating mk/config.mk...done. 00:01:04.108 Creating mk/cc.flags.mk...done. 00:01:04.108 Type 'make' to build. 00:01:04.108 13:30:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:04.108 13:30:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:04.108 13:30:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:04.108 13:30:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.108 ************************************ 00:01:04.108 START TEST make 00:01:04.108 ************************************ 00:01:04.108 13:30:06 -- common/autotest_common.sh@1111 -- $ make -j48 00:01:04.108 make[1]: Nothing to be done for 'all'. 
00:01:05.499 The Meson build system 00:01:05.499 Version: 1.3.1 00:01:05.499 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:05.499 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:05.499 Build type: native build 00:01:05.499 Project name: libvfio-user 00:01:05.499 Project version: 0.0.1 00:01:05.499 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:05.499 C linker for the host machine: cc ld.bfd 2.39-16 00:01:05.499 Host machine cpu family: x86_64 00:01:05.499 Host machine cpu: x86_64 00:01:05.499 Run-time dependency threads found: YES 00:01:05.499 Library dl found: YES 00:01:05.499 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:05.499 Run-time dependency json-c found: YES 0.17 00:01:05.499 Run-time dependency cmocka found: YES 1.1.7 00:01:05.499 Program pytest-3 found: NO 00:01:05.499 Program flake8 found: NO 00:01:05.499 Program misspell-fixer found: NO 00:01:05.499 Program restructuredtext-lint found: NO 00:01:05.499 Program valgrind found: YES (/usr/bin/valgrind) 00:01:05.499 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:05.499 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:05.499 Compiler for C supports arguments -Wwrite-strings: YES 00:01:05.499 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:05.499 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:05.499 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:05.499 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:05.499 Build targets in project: 8 00:01:05.499 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:05.499 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:05.499 00:01:05.499 libvfio-user 0.0.1 00:01:05.499 00:01:05.499 User defined options 00:01:05.499 buildtype : debug 00:01:05.499 default_library: shared 00:01:05.499 libdir : /usr/local/lib 00:01:05.499 00:01:05.499 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.080 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.342 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:06.342 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:06.342 [3/37] Compiling C object samples/null.p/null.c.o 00:01:06.342 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:06.342 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:06.342 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:06.342 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:06.342 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:06.342 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:06.342 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:06.342 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:06.342 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:06.342 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:06.342 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:06.342 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:06.342 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:06.342 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:06.342 [18/37] Compiling C 
object test/unit_tests.p/.._lib_dma.c.o 00:01:06.342 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:06.342 [20/37] Compiling C object samples/server.p/server.c.o 00:01:06.342 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:06.343 [22/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:06.603 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:06.603 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:06.603 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:06.603 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:06.603 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:01:06.603 [28/37] Compiling C object samples/client.p/client.c.o 00:01:06.603 [29/37] Linking target samples/client 00:01:06.603 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:06.866 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:06.866 [32/37] Linking target test/unit_tests 00:01:06.866 [33/37] Linking target samples/server 00:01:06.866 [34/37] Linking target samples/null 00:01:06.866 [35/37] Linking target samples/lspci 00:01:06.866 [36/37] Linking target samples/gpio-pci-idio-16 00:01:06.866 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:06.866 INFO: autodetecting backend as ninja 00:01:06.866 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:06.866 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:07.811 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:07.811 ninja: no work to do. 
00:01:13.116 The Meson build system 00:01:13.116 Version: 1.3.1 00:01:13.116 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:13.116 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:13.116 Build type: native build 00:01:13.116 Program cat found: YES (/usr/bin/cat) 00:01:13.116 Project name: DPDK 00:01:13.116 Project version: 24.03.0 00:01:13.116 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:13.116 C linker for the host machine: cc ld.bfd 2.39-16 00:01:13.116 Host machine cpu family: x86_64 00:01:13.116 Host machine cpu: x86_64 00:01:13.116 Message: ## Building in Developer Mode ## 00:01:13.116 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:13.116 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:13.116 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:13.116 Program python3 found: YES (/usr/bin/python3) 00:01:13.116 Program cat found: YES (/usr/bin/cat) 00:01:13.116 Compiler for C supports arguments -march=native: YES 00:01:13.116 Checking for size of "void *" : 8 00:01:13.116 Checking for size of "void *" : 8 (cached) 00:01:13.116 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:13.116 Library m found: YES 00:01:13.116 Library numa found: YES 00:01:13.116 Has header "numaif.h" : YES 00:01:13.116 Library fdt found: NO 00:01:13.116 Library execinfo found: NO 00:01:13.116 Has header "execinfo.h" : YES 00:01:13.116 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:13.116 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:13.116 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:13.116 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:13.116 Run-time dependency openssl found: YES 3.0.9 00:01:13.116 Run-time 
dependency libpcap found: YES 1.10.4 00:01:13.116 Has header "pcap.h" with dependency libpcap: YES 00:01:13.116 Compiler for C supports arguments -Wcast-qual: YES 00:01:13.116 Compiler for C supports arguments -Wdeprecated: YES 00:01:13.116 Compiler for C supports arguments -Wformat: YES 00:01:13.116 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:13.116 Compiler for C supports arguments -Wformat-security: NO 00:01:13.116 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:13.116 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:13.116 Compiler for C supports arguments -Wnested-externs: YES 00:01:13.116 Compiler for C supports arguments -Wold-style-definition: YES 00:01:13.116 Compiler for C supports arguments -Wpointer-arith: YES 00:01:13.116 Compiler for C supports arguments -Wsign-compare: YES 00:01:13.116 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:13.116 Compiler for C supports arguments -Wundef: YES 00:01:13.116 Compiler for C supports arguments -Wwrite-strings: YES 00:01:13.116 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:13.116 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:13.116 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:13.116 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:13.116 Program objdump found: YES (/usr/bin/objdump) 00:01:13.116 Compiler for C supports arguments -mavx512f: YES 00:01:13.116 Checking if "AVX512 checking" compiles: YES 00:01:13.116 Fetching value of define "__SSE4_2__" : 1 00:01:13.116 Fetching value of define "__AES__" : 1 00:01:13.116 Fetching value of define "__AVX__" : 1 00:01:13.116 Fetching value of define "__AVX2__" : (undefined) 00:01:13.116 Fetching value of define "__AVX512BW__" : (undefined) 00:01:13.116 Fetching value of define "__AVX512CD__" : (undefined) 00:01:13.116 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:13.116 Fetching 
value of define "__AVX512F__" : (undefined) 00:01:13.116 Fetching value of define "__AVX512VL__" : (undefined) 00:01:13.116 Fetching value of define "__PCLMUL__" : 1 00:01:13.116 Fetching value of define "__RDRND__" : 1 00:01:13.116 Fetching value of define "__RDSEED__" : (undefined) 00:01:13.116 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:13.116 Fetching value of define "__znver1__" : (undefined) 00:01:13.116 Fetching value of define "__znver2__" : (undefined) 00:01:13.116 Fetching value of define "__znver3__" : (undefined) 00:01:13.116 Fetching value of define "__znver4__" : (undefined) 00:01:13.116 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:13.116 Message: lib/log: Defining dependency "log" 00:01:13.116 Message: lib/kvargs: Defining dependency "kvargs" 00:01:13.116 Message: lib/telemetry: Defining dependency "telemetry" 00:01:13.116 Checking for function "getentropy" : NO 00:01:13.116 Message: lib/eal: Defining dependency "eal" 00:01:13.116 Message: lib/ring: Defining dependency "ring" 00:01:13.116 Message: lib/rcu: Defining dependency "rcu" 00:01:13.116 Message: lib/mempool: Defining dependency "mempool" 00:01:13.116 Message: lib/mbuf: Defining dependency "mbuf" 00:01:13.116 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:13.116 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:13.116 Compiler for C supports arguments -mpclmul: YES 00:01:13.116 Compiler for C supports arguments -maes: YES 00:01:13.116 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:13.116 Compiler for C supports arguments -mavx512bw: YES 00:01:13.116 Compiler for C supports arguments -mavx512dq: YES 00:01:13.116 Compiler for C supports arguments -mavx512vl: YES 00:01:13.116 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:13.116 Compiler for C supports arguments -mavx2: YES 00:01:13.116 Compiler for C supports arguments -mavx: YES 00:01:13.116 Message: lib/net: Defining dependency "net" 00:01:13.116 
Message: lib/meter: Defining dependency "meter" 00:01:13.116 Message: lib/ethdev: Defining dependency "ethdev" 00:01:13.116 Message: lib/pci: Defining dependency "pci" 00:01:13.116 Message: lib/cmdline: Defining dependency "cmdline" 00:01:13.116 Message: lib/hash: Defining dependency "hash" 00:01:13.116 Message: lib/timer: Defining dependency "timer" 00:01:13.116 Message: lib/compressdev: Defining dependency "compressdev" 00:01:13.116 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:13.116 Message: lib/dmadev: Defining dependency "dmadev" 00:01:13.116 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:13.116 Message: lib/power: Defining dependency "power" 00:01:13.116 Message: lib/reorder: Defining dependency "reorder" 00:01:13.116 Message: lib/security: Defining dependency "security" 00:01:13.116 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:13.116 Message: lib/stack: Defining dependency "stack" 00:01:13.116 Has header "linux/userfaultfd.h" : YES 00:01:13.116 Has header "linux/vduse.h" : YES 00:01:13.116 Message: lib/vhost: Defining dependency "vhost" 00:01:13.116 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:13.116 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:13.116 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:13.116 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:13.116 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:13.116 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:13.116 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:13.116 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:13.116 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:13.116 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:13.116 Program doxygen found: YES (/usr/bin/doxygen) 
00:01:13.116 Configuring doxy-api-html.conf using configuration 00:01:13.116 Configuring doxy-api-man.conf using configuration 00:01:13.116 Program mandb found: YES (/usr/bin/mandb) 00:01:13.116 Program sphinx-build found: NO 00:01:13.116 Configuring rte_build_config.h using configuration 00:01:13.116 Message: 00:01:13.116 ================= 00:01:13.116 Applications Enabled 00:01:13.116 ================= 00:01:13.116 00:01:13.116 apps: 00:01:13.116 00:01:13.116 00:01:13.116 Message: 00:01:13.116 ================= 00:01:13.116 Libraries Enabled 00:01:13.116 ================= 00:01:13.116 00:01:13.116 libs: 00:01:13.116 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:13.116 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:13.116 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:13.116 00:01:13.116 Message: 00:01:13.116 =============== 00:01:13.116 Drivers Enabled 00:01:13.116 =============== 00:01:13.116 00:01:13.116 common: 00:01:13.116 00:01:13.116 bus: 00:01:13.116 pci, vdev, 00:01:13.116 mempool: 00:01:13.116 ring, 00:01:13.116 dma: 00:01:13.116 00:01:13.116 net: 00:01:13.116 00:01:13.116 crypto: 00:01:13.116 00:01:13.116 compress: 00:01:13.116 00:01:13.116 vdpa: 00:01:13.116 00:01:13.116 00:01:13.116 Message: 00:01:13.116 ================= 00:01:13.116 Content Skipped 00:01:13.116 ================= 00:01:13.116 00:01:13.116 apps: 00:01:13.116 dumpcap: explicitly disabled via build config 00:01:13.116 graph: explicitly disabled via build config 00:01:13.116 pdump: explicitly disabled via build config 00:01:13.116 proc-info: explicitly disabled via build config 00:01:13.116 test-acl: explicitly disabled via build config 00:01:13.116 test-bbdev: explicitly disabled via build config 00:01:13.116 test-cmdline: explicitly disabled via build config 00:01:13.116 test-compress-perf: explicitly disabled via build config 00:01:13.116 test-crypto-perf: explicitly disabled via build config 00:01:13.116 test-dma-perf: explicitly 
disabled via build config 00:01:13.116 test-eventdev: explicitly disabled via build config 00:01:13.116 test-fib: explicitly disabled via build config 00:01:13.116 test-flow-perf: explicitly disabled via build config 00:01:13.116 test-gpudev: explicitly disabled via build config 00:01:13.116 test-mldev: explicitly disabled via build config 00:01:13.116 test-pipeline: explicitly disabled via build config 00:01:13.116 test-pmd: explicitly disabled via build config 00:01:13.116 test-regex: explicitly disabled via build config 00:01:13.116 test-sad: explicitly disabled via build config 00:01:13.116 test-security-perf: explicitly disabled via build config 00:01:13.116 00:01:13.116 libs: 00:01:13.116 argparse: explicitly disabled via build config 00:01:13.116 metrics: explicitly disabled via build config 00:01:13.116 acl: explicitly disabled via build config 00:01:13.116 bbdev: explicitly disabled via build config 00:01:13.116 bitratestats: explicitly disabled via build config 00:01:13.116 bpf: explicitly disabled via build config 00:01:13.116 cfgfile: explicitly disabled via build config 00:01:13.116 distributor: explicitly disabled via build config 00:01:13.117 efd: explicitly disabled via build config 00:01:13.117 eventdev: explicitly disabled via build config 00:01:13.117 dispatcher: explicitly disabled via build config 00:01:13.117 gpudev: explicitly disabled via build config 00:01:13.117 gro: explicitly disabled via build config 00:01:13.117 gso: explicitly disabled via build config 00:01:13.117 ip_frag: explicitly disabled via build config 00:01:13.117 jobstats: explicitly disabled via build config 00:01:13.117 latencystats: explicitly disabled via build config 00:01:13.117 lpm: explicitly disabled via build config 00:01:13.117 member: explicitly disabled via build config 00:01:13.117 pcapng: explicitly disabled via build config 00:01:13.117 rawdev: explicitly disabled via build config 00:01:13.117 regexdev: explicitly disabled via build config 00:01:13.117 mldev: 
explicitly disabled via build config 00:01:13.117 rib: explicitly disabled via build config 00:01:13.117 sched: explicitly disabled via build config 00:01:13.117 ipsec: explicitly disabled via build config 00:01:13.117 pdcp: explicitly disabled via build config 00:01:13.117 fib: explicitly disabled via build config 00:01:13.117 port: explicitly disabled via build config 00:01:13.117 pdump: explicitly disabled via build config 00:01:13.117 table: explicitly disabled via build config 00:01:13.117 pipeline: explicitly disabled via build config 00:01:13.117 graph: explicitly disabled via build config 00:01:13.117 node: explicitly disabled via build config 00:01:13.117 00:01:13.117 drivers: 00:01:13.117 common/cpt: not in enabled drivers build config 00:01:13.117 common/dpaax: not in enabled drivers build config 00:01:13.117 common/iavf: not in enabled drivers build config 00:01:13.117 common/idpf: not in enabled drivers build config 00:01:13.117 common/ionic: not in enabled drivers build config 00:01:13.117 common/mvep: not in enabled drivers build config 00:01:13.117 common/octeontx: not in enabled drivers build config 00:01:13.117 bus/auxiliary: not in enabled drivers build config 00:01:13.117 bus/cdx: not in enabled drivers build config 00:01:13.117 bus/dpaa: not in enabled drivers build config 00:01:13.117 bus/fslmc: not in enabled drivers build config 00:01:13.117 bus/ifpga: not in enabled drivers build config 00:01:13.117 bus/platform: not in enabled drivers build config 00:01:13.117 bus/uacce: not in enabled drivers build config 00:01:13.117 bus/vmbus: not in enabled drivers build config 00:01:13.117 common/cnxk: not in enabled drivers build config 00:01:13.117 common/mlx5: not in enabled drivers build config 00:01:13.117 common/nfp: not in enabled drivers build config 00:01:13.117 common/nitrox: not in enabled drivers build config 00:01:13.117 common/qat: not in enabled drivers build config 00:01:13.117 common/sfc_efx: not in enabled drivers build config 
00:01:13.117 mempool/bucket: not in enabled drivers build config 00:01:13.117 mempool/cnxk: not in enabled drivers build config 00:01:13.117 mempool/dpaa: not in enabled drivers build config 00:01:13.117 mempool/dpaa2: not in enabled drivers build config 00:01:13.117 mempool/octeontx: not in enabled drivers build config 00:01:13.117 mempool/stack: not in enabled drivers build config 00:01:13.117 dma/cnxk: not in enabled drivers build config 00:01:13.117 dma/dpaa: not in enabled drivers build config 00:01:13.117 dma/dpaa2: not in enabled drivers build config 00:01:13.117 dma/hisilicon: not in enabled drivers build config 00:01:13.117 dma/idxd: not in enabled drivers build config 00:01:13.117 dma/ioat: not in enabled drivers build config 00:01:13.117 dma/skeleton: not in enabled drivers build config 00:01:13.117 net/af_packet: not in enabled drivers build config 00:01:13.117 net/af_xdp: not in enabled drivers build config 00:01:13.117 net/ark: not in enabled drivers build config 00:01:13.117 net/atlantic: not in enabled drivers build config 00:01:13.117 net/avp: not in enabled drivers build config 00:01:13.117 net/axgbe: not in enabled drivers build config 00:01:13.117 net/bnx2x: not in enabled drivers build config 00:01:13.117 net/bnxt: not in enabled drivers build config 00:01:13.117 net/bonding: not in enabled drivers build config 00:01:13.117 net/cnxk: not in enabled drivers build config 00:01:13.117 net/cpfl: not in enabled drivers build config 00:01:13.117 net/cxgbe: not in enabled drivers build config 00:01:13.117 net/dpaa: not in enabled drivers build config 00:01:13.117 net/dpaa2: not in enabled drivers build config 00:01:13.117 net/e1000: not in enabled drivers build config 00:01:13.117 net/ena: not in enabled drivers build config 00:01:13.117 net/enetc: not in enabled drivers build config 00:01:13.117 net/enetfec: not in enabled drivers build config 00:01:13.117 net/enic: not in enabled drivers build config 00:01:13.117 net/failsafe: not in enabled drivers 
build config 00:01:13.117 net/fm10k: not in enabled drivers build config 00:01:13.117 net/gve: not in enabled drivers build config 00:01:13.117 net/hinic: not in enabled drivers build config 00:01:13.117 net/hns3: not in enabled drivers build config 00:01:13.117 net/i40e: not in enabled drivers build config 00:01:13.117 net/iavf: not in enabled drivers build config 00:01:13.117 net/ice: not in enabled drivers build config 00:01:13.117 net/idpf: not in enabled drivers build config 00:01:13.117 net/igc: not in enabled drivers build config 00:01:13.117 net/ionic: not in enabled drivers build config 00:01:13.117 net/ipn3ke: not in enabled drivers build config 00:01:13.117 net/ixgbe: not in enabled drivers build config 00:01:13.117 net/mana: not in enabled drivers build config 00:01:13.117 net/memif: not in enabled drivers build config 00:01:13.117 net/mlx4: not in enabled drivers build config 00:01:13.117 net/mlx5: not in enabled drivers build config 00:01:13.117 net/mvneta: not in enabled drivers build config 00:01:13.117 net/mvpp2: not in enabled drivers build config 00:01:13.117 net/netvsc: not in enabled drivers build config 00:01:13.117 net/nfb: not in enabled drivers build config 00:01:13.117 net/nfp: not in enabled drivers build config 00:01:13.117 net/ngbe: not in enabled drivers build config 00:01:13.117 net/null: not in enabled drivers build config 00:01:13.117 net/octeontx: not in enabled drivers build config 00:01:13.117 net/octeon_ep: not in enabled drivers build config 00:01:13.117 net/pcap: not in enabled drivers build config 00:01:13.117 net/pfe: not in enabled drivers build config 00:01:13.117 net/qede: not in enabled drivers build config 00:01:13.117 net/ring: not in enabled drivers build config 00:01:13.117 net/sfc: not in enabled drivers build config 00:01:13.117 net/softnic: not in enabled drivers build config 00:01:13.117 net/tap: not in enabled drivers build config 00:01:13.117 net/thunderx: not in enabled drivers build config 00:01:13.117 
net/txgbe: not in enabled drivers build config 00:01:13.117 net/vdev_netvsc: not in enabled drivers build config 00:01:13.117 net/vhost: not in enabled drivers build config 00:01:13.117 net/virtio: not in enabled drivers build config 00:01:13.117 net/vmxnet3: not in enabled drivers build config 00:01:13.117 raw/*: missing internal dependency, "rawdev" 00:01:13.117 crypto/armv8: not in enabled drivers build config 00:01:13.117 crypto/bcmfs: not in enabled drivers build config 00:01:13.117 crypto/caam_jr: not in enabled drivers build config 00:01:13.117 crypto/ccp: not in enabled drivers build config 00:01:13.117 crypto/cnxk: not in enabled drivers build config 00:01:13.117 crypto/dpaa_sec: not in enabled drivers build config 00:01:13.117 crypto/dpaa2_sec: not in enabled drivers build config 00:01:13.117 crypto/ipsec_mb: not in enabled drivers build config 00:01:13.117 crypto/mlx5: not in enabled drivers build config 00:01:13.117 crypto/mvsam: not in enabled drivers build config 00:01:13.117 crypto/nitrox: not in enabled drivers build config 00:01:13.117 crypto/null: not in enabled drivers build config 00:01:13.117 crypto/octeontx: not in enabled drivers build config 00:01:13.117 crypto/openssl: not in enabled drivers build config 00:01:13.117 crypto/scheduler: not in enabled drivers build config 00:01:13.117 crypto/uadk: not in enabled drivers build config 00:01:13.117 crypto/virtio: not in enabled drivers build config 00:01:13.117 compress/isal: not in enabled drivers build config 00:01:13.117 compress/mlx5: not in enabled drivers build config 00:01:13.117 compress/nitrox: not in enabled drivers build config 00:01:13.117 compress/octeontx: not in enabled drivers build config 00:01:13.117 compress/zlib: not in enabled drivers build config 00:01:13.117 regex/*: missing internal dependency, "regexdev" 00:01:13.117 ml/*: missing internal dependency, "mldev" 00:01:13.117 vdpa/ifc: not in enabled drivers build config 00:01:13.117 vdpa/mlx5: not in enabled drivers build 
config 00:01:13.117 vdpa/nfp: not in enabled drivers build config 00:01:13.117 vdpa/sfc: not in enabled drivers build config 00:01:13.117 event/*: missing internal dependency, "eventdev" 00:01:13.117 baseband/*: missing internal dependency, "bbdev" 00:01:13.117 gpu/*: missing internal dependency, "gpudev" 00:01:13.117 00:01:13.117 00:01:13.117 Build targets in project: 88 00:01:13.117 00:01:13.117 DPDK 24.03.0 00:01:13.117 00:01:13.117 User defined options 00:01:13.117 buildtype : debug 00:01:13.117 default_library : shared 00:01:13.117 libdir : lib 00:01:13.117 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.117 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:13.117 c_link_args : 00:01:13.117 cpu_instruction_set: native 00:01:13.117 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:13.117 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib,argparse 00:01:13.117 enable_docs : false 00:01:13.117 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:13.117 enable_kmods : false 00:01:13.117 tests : false 00:01:13.117 00:01:13.117 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.117 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:13.117 [1/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:13.117 [2/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:13.117 [3/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:13.117 
[4/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:13.117 [5/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:13.117 [6/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:13.117 [7/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:13.117 [8/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:13.117 [9/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:13.117 [10/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:13.117 [11/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.117 [12/274] Linking static target lib/librte_kvargs.a 00:01:13.379 [13/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:13.379 [14/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:13.379 [15/274] Linking static target lib/librte_log.a 00:01:13.379 [16/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:13.951 [17/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.951 [18/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:13.951 [19/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:13.951 [20/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:13.951 [21/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:13.951 [22/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:13.951 [23/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:13.951 [24/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:13.951 [25/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:13.951 [26/274] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:13.951 [27/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:13.951 [28/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:14.214 [29/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:14.214 [30/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:14.214 [31/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:14.214 [32/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:14.214 [33/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:14.214 [34/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:14.214 [35/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:14.214 [36/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:14.214 [37/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:14.214 [38/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:14.214 [39/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:14.214 [40/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:14.214 [41/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:14.214 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:14.214 [43/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:14.214 [44/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:14.214 [45/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:14.214 [46/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:14.214 [47/274] Linking static target lib/librte_telemetry.a 00:01:14.214 [48/274] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:14.214 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:14.214 [50/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:14.214 [51/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:14.214 [52/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:14.214 [53/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:14.214 [54/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:14.214 [55/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:14.214 [56/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:14.214 [57/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:14.214 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:14.214 [59/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:14.214 [60/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:14.476 [61/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.476 [62/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:14.476 [63/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:14.476 [64/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:14.476 [65/274] Linking target lib/librte_log.so.24.1 00:01:14.476 [66/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:14.476 [67/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:14.735 [68/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:14.735 [69/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:14.735 [70/274] Linking static target lib/librte_pci.a 00:01:14.735 [71/274] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:14.735 [72/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:14.735 [73/274] Linking target lib/librte_kvargs.so.24.1 00:01:15.002 [74/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:15.002 [75/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:15.002 [76/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:15.002 [77/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:15.002 [78/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:15.002 [79/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:15.002 [80/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:15.002 [81/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:15.002 [82/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:15.002 [83/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:15.002 [84/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:15.002 [85/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.002 [86/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:15.002 [87/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:15.002 [88/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:15.002 [89/274] Linking static target lib/librte_meter.a 00:01:15.002 [90/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:15.002 [91/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:15.002 [92/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:15.002 [93/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:15.002 [94/274] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:15.002 [95/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:15.264 [96/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:15.264 [97/274] Linking target lib/librte_telemetry.so.24.1 00:01:15.264 [98/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:15.264 [99/274] Linking static target lib/librte_ring.a 00:01:15.264 [100/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:15.264 [101/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:15.264 [102/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:15.264 [103/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:15.264 [104/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:15.264 [105/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:15.264 [106/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:15.264 [107/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:15.264 [108/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:15.264 [109/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:15.264 [110/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:15.264 [111/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.264 [112/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:15.264 [113/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:15.264 [114/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:15.264 [115/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:15.264 [116/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:15.264 
[117/274] Linking static target lib/librte_rcu.a 00:01:15.264 [118/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:15.264 [119/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:15.264 [120/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:15.264 [121/274] Linking static target lib/librte_mempool.a 00:01:15.264 [122/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:15.264 [123/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:15.264 [124/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:15.264 [125/274] Linking static target lib/librte_eal.a 00:01:15.526 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:15.526 [127/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:15.526 [128/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:15.526 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:15.526 [130/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:15.526 [131/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.526 [132/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:15.793 [133/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:15.793 [134/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:15.793 [135/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:15.793 [136/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:15.793 [137/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:15.793 [138/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.054 [139/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:16.054 
[140/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:16.054 [141/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:16.054 [142/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.054 [143/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:16.054 [144/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:16.054 [145/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:16.054 [146/274] Linking static target lib/librte_cmdline.a 00:01:16.054 [147/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:16.054 [148/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:16.054 [149/274] Linking static target lib/librte_net.a 00:01:16.054 [150/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:16.054 [151/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:16.054 [152/274] Linking static target lib/librte_timer.a 00:01:16.054 [153/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:16.054 [154/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:16.313 [155/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:16.313 [156/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:16.313 [157/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:16.313 [158/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:16.313 [159/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:16.313 [160/274] Linking static target lib/librte_dmadev.a 00:01:16.313 [161/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:16.313 [162/274] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:16.313 [163/274] Linking static target lib/librte_stack.a 00:01:16.313 [164/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:16.313 [165/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:16.313 [166/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:16.313 [167/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:16.313 [168/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:16.313 [169/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:16.572 [170/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:16.572 [171/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.572 [172/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.572 [173/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:16.572 [174/274] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:16.572 [175/274] Linking static target lib/librte_power.a 00:01:16.572 [176/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.572 [177/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:16.572 [178/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.572 [179/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:16.572 [180/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:16.572 [181/274] Linking static target lib/librte_compressdev.a 00:01:16.830 [182/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:16.830 [183/274] Linking static target lib/librte_hash.a 00:01:16.830 [184/274] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:16.830 [185/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:16.830 [186/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:16.830 [187/274] Linking static target lib/librte_reorder.a 00:01:16.830 [188/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.830 [189/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:16.830 [190/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:16.830 [191/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:16.830 [192/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:16.830 [193/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:16.830 [194/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:16.830 [195/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:16.830 [196/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:16.830 [197/274] Linking static target lib/librte_mbuf.a 00:01:17.090 [198/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.090 [199/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:17.090 [200/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:17.090 [201/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:17.090 [202/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.090 [203/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:17.090 [204/274] Linking static target drivers/librte_bus_vdev.a 00:01:17.090 [205/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.090 [206/274] Generating 
lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.090 [207/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.090 [208/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:17.090 [209/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:17.090 [210/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:17.348 [211/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:17.348 [212/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:17.348 [213/274] Linking static target lib/librte_security.a 00:01:17.348 [214/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.348 [215/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:17.348 [216/274] Linking static target drivers/librte_mempool_ring.a 00:01:17.348 [217/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:17.348 [218/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:17.348 [219/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.348 [220/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.348 [221/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:17.348 [222/274] Linking static target lib/librte_ethdev.a 00:01:17.348 [223/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.348 [224/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:17.348 [225/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.348 [226/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:17.348 [227/274] 
Linking static target drivers/librte_bus_pci.a 00:01:17.606 [228/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:17.606 [229/274] Linking static target lib/librte_cryptodev.a 00:01:17.606 [230/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.864 [231/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.799 [232/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.733 [233/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:21.634 [234/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.634 [235/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.634 [236/274] Linking target lib/librte_eal.so.24.1 00:01:21.634 [237/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:21.634 [238/274] Linking target lib/librte_ring.so.24.1 00:01:21.634 [239/274] Linking target lib/librte_dmadev.so.24.1 00:01:21.634 [240/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:21.634 [241/274] Linking target lib/librte_timer.so.24.1 00:01:21.634 [242/274] Linking target lib/librte_meter.so.24.1 00:01:21.634 [243/274] Linking target lib/librte_pci.so.24.1 00:01:21.634 [244/274] Linking target lib/librte_stack.so.24.1 00:01:21.893 [245/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:21.893 [246/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:21.893 [247/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:21.893 [248/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:21.893 [249/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:21.893 
[250/274] Linking target lib/librte_rcu.so.24.1 00:01:21.893 [251/274] Linking target lib/librte_mempool.so.24.1 00:01:21.893 [252/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:22.151 [253/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:22.151 [254/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:22.151 [255/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:22.151 [256/274] Linking target lib/librte_mbuf.so.24.1 00:01:22.151 [257/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:22.151 [258/274] Linking target lib/librte_compressdev.so.24.1 00:01:22.151 [259/274] Linking target lib/librte_reorder.so.24.1 00:01:22.151 [260/274] Linking target lib/librte_net.so.24.1 00:01:22.151 [261/274] Linking target lib/librte_cryptodev.so.24.1 00:01:22.409 [262/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:22.409 [263/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:22.409 [264/274] Linking target lib/librte_hash.so.24.1 00:01:22.409 [265/274] Linking target lib/librte_security.so.24.1 00:01:22.409 [266/274] Linking target lib/librte_cmdline.so.24.1 00:01:22.409 [267/274] Linking target lib/librte_ethdev.so.24.1 00:01:22.409 [268/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:22.668 [269/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:22.668 [270/274] Linking target lib/librte_power.so.24.1 00:01:25.980 [271/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:25.980 [272/274] Linking static target lib/librte_vhost.a 00:01:26.550 [273/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.550 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:26.550 INFO: autodetecting backend as ninja 
00:01:26.550 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:27.486 CC lib/log/log.o 00:01:27.486 CC lib/ut/ut.o 00:01:27.486 CC lib/log/log_flags.o 00:01:27.486 CC lib/log/log_deprecated.o 00:01:27.486 CC lib/ut_mock/mock.o 00:01:27.743 LIB libspdk_ut_mock.a 00:01:27.743 LIB libspdk_log.a 00:01:27.743 SO libspdk_ut_mock.so.6.0 00:01:27.743 LIB libspdk_ut.a 00:01:27.743 SO libspdk_ut.so.2.0 00:01:27.743 SO libspdk_log.so.7.0 00:01:27.743 SYMLINK libspdk_ut_mock.so 00:01:27.743 SYMLINK libspdk_ut.so 00:01:27.743 SYMLINK libspdk_log.so 00:01:28.002 CC lib/dma/dma.o 00:01:28.002 CC lib/ioat/ioat.o 00:01:28.002 CXX lib/trace_parser/trace.o 00:01:28.002 CC lib/util/base64.o 00:01:28.002 CC lib/util/bit_array.o 00:01:28.002 CC lib/util/cpuset.o 00:01:28.002 CC lib/util/crc16.o 00:01:28.002 CC lib/util/crc32.o 00:01:28.002 CC lib/util/crc32c.o 00:01:28.002 CC lib/util/crc32_ieee.o 00:01:28.002 CC lib/util/crc64.o 00:01:28.002 CC lib/util/dif.o 00:01:28.002 CC lib/util/fd.o 00:01:28.002 CC lib/util/file.o 00:01:28.002 CC lib/util/hexlify.o 00:01:28.002 CC lib/util/iov.o 00:01:28.002 CC lib/util/math.o 00:01:28.002 CC lib/util/pipe.o 00:01:28.002 CC lib/util/strerror_tls.o 00:01:28.002 CC lib/util/string.o 00:01:28.002 CC lib/util/uuid.o 00:01:28.002 CC lib/util/fd_group.o 00:01:28.002 CC lib/util/xor.o 00:01:28.002 CC lib/util/zipf.o 00:01:28.002 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.002 CC lib/vfio_user/host/vfio_user.o 00:01:28.260 LIB libspdk_dma.a 00:01:28.260 LIB libspdk_ioat.a 00:01:28.260 SO libspdk_dma.so.4.0 00:01:28.260 SO libspdk_ioat.so.7.0 00:01:28.260 LIB libspdk_vfio_user.a 00:01:28.260 SYMLINK libspdk_dma.so 00:01:28.260 SYMLINK libspdk_ioat.so 00:01:28.260 SO libspdk_vfio_user.so.5.0 00:01:28.518 SYMLINK libspdk_vfio_user.so 00:01:28.518 LIB libspdk_util.a 00:01:28.518 SO libspdk_util.so.9.0 00:01:28.775 SYMLINK libspdk_util.so 00:01:29.033 CC 
lib/rdma/common.o 00:01:29.033 CC lib/env_dpdk/env.o 00:01:29.033 CC lib/idxd/idxd.o 00:01:29.033 CC lib/rdma/rdma_verbs.o 00:01:29.033 CC lib/json/json_parse.o 00:01:29.033 CC lib/env_dpdk/memory.o 00:01:29.033 CC lib/conf/conf.o 00:01:29.033 CC lib/idxd/idxd_user.o 00:01:29.033 CC lib/vmd/vmd.o 00:01:29.033 CC lib/json/json_util.o 00:01:29.033 CC lib/env_dpdk/pci.o 00:01:29.033 CC lib/json/json_write.o 00:01:29.033 CC lib/vmd/led.o 00:01:29.033 CC lib/env_dpdk/init.o 00:01:29.033 LIB libspdk_trace_parser.a 00:01:29.033 CC lib/env_dpdk/threads.o 00:01:29.033 CC lib/env_dpdk/pci_ioat.o 00:01:29.033 CC lib/env_dpdk/pci_virtio.o 00:01:29.033 CC lib/env_dpdk/pci_vmd.o 00:01:29.033 CC lib/env_dpdk/pci_idxd.o 00:01:29.033 CC lib/env_dpdk/pci_event.o 00:01:29.033 CC lib/env_dpdk/sigbus_handler.o 00:01:29.033 CC lib/env_dpdk/pci_dpdk.o 00:01:29.033 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:29.033 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:29.033 SO libspdk_trace_parser.so.5.0 00:01:29.033 SYMLINK libspdk_trace_parser.so 00:01:29.292 LIB libspdk_conf.a 00:01:29.292 SO libspdk_conf.so.6.0 00:01:29.292 LIB libspdk_rdma.a 00:01:29.292 LIB libspdk_json.a 00:01:29.292 SYMLINK libspdk_conf.so 00:01:29.292 SO libspdk_rdma.so.6.0 00:01:29.292 SO libspdk_json.so.6.0 00:01:29.292 SYMLINK libspdk_rdma.so 00:01:29.292 SYMLINK libspdk_json.so 00:01:29.550 LIB libspdk_idxd.a 00:01:29.550 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.550 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.550 CC lib/jsonrpc/jsonrpc_client.o 00:01:29.550 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:29.550 SO libspdk_idxd.so.12.0 00:01:29.550 SYMLINK libspdk_idxd.so 00:01:29.550 LIB libspdk_vmd.a 00:01:29.550 SO libspdk_vmd.so.6.0 00:01:29.808 SYMLINK libspdk_vmd.so 00:01:29.808 LIB libspdk_jsonrpc.a 00:01:29.808 SO libspdk_jsonrpc.so.6.0 00:01:29.808 SYMLINK libspdk_jsonrpc.so 00:01:30.066 CC lib/rpc/rpc.o 00:01:30.324 LIB libspdk_rpc.a 00:01:30.324 SO libspdk_rpc.so.6.0 00:01:30.324 SYMLINK libspdk_rpc.so 00:01:30.582 CC 
lib/keyring/keyring.o 00:01:30.582 CC lib/keyring/keyring_rpc.o 00:01:30.582 CC lib/notify/notify.o 00:01:30.582 CC lib/notify/notify_rpc.o 00:01:30.582 CC lib/trace/trace.o 00:01:30.582 CC lib/trace/trace_flags.o 00:01:30.582 CC lib/trace/trace_rpc.o 00:01:30.582 LIB libspdk_notify.a 00:01:30.582 SO libspdk_notify.so.6.0 00:01:30.582 LIB libspdk_keyring.a 00:01:30.582 LIB libspdk_trace.a 00:01:30.840 SYMLINK libspdk_notify.so 00:01:30.840 SO libspdk_keyring.so.1.0 00:01:30.840 SO libspdk_trace.so.10.0 00:01:30.840 SYMLINK libspdk_keyring.so 00:01:30.840 SYMLINK libspdk_trace.so 00:01:30.840 LIB libspdk_env_dpdk.a 00:01:30.840 CC lib/thread/thread.o 00:01:30.840 CC lib/sock/sock.o 00:01:30.840 CC lib/thread/iobuf.o 00:01:30.840 CC lib/sock/sock_rpc.o 00:01:31.098 SO libspdk_env_dpdk.so.14.0 00:01:31.098 SYMLINK libspdk_env_dpdk.so 00:01:31.356 LIB libspdk_sock.a 00:01:31.356 SO libspdk_sock.so.9.0 00:01:31.356 SYMLINK libspdk_sock.so 00:01:31.614 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:31.614 CC lib/nvme/nvme_ctrlr.o 00:01:31.614 CC lib/nvme/nvme_fabric.o 00:01:31.614 CC lib/nvme/nvme_ns_cmd.o 00:01:31.614 CC lib/nvme/nvme_ns.o 00:01:31.614 CC lib/nvme/nvme_pcie_common.o 00:01:31.614 CC lib/nvme/nvme_pcie.o 00:01:31.614 CC lib/nvme/nvme_qpair.o 00:01:31.614 CC lib/nvme/nvme.o 00:01:31.614 CC lib/nvme/nvme_quirks.o 00:01:31.614 CC lib/nvme/nvme_transport.o 00:01:31.614 CC lib/nvme/nvme_discovery.o 00:01:31.614 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:31.614 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:31.614 CC lib/nvme/nvme_tcp.o 00:01:31.614 CC lib/nvme/nvme_opal.o 00:01:31.614 CC lib/nvme/nvme_io_msg.o 00:01:31.614 CC lib/nvme/nvme_poll_group.o 00:01:31.614 CC lib/nvme/nvme_zns.o 00:01:31.614 CC lib/nvme/nvme_stubs.o 00:01:31.614 CC lib/nvme/nvme_auth.o 00:01:31.614 CC lib/nvme/nvme_cuse.o 00:01:31.614 CC lib/nvme/nvme_vfio_user.o 00:01:31.615 CC lib/nvme/nvme_rdma.o 00:01:32.548 LIB libspdk_thread.a 00:01:32.548 SO libspdk_thread.so.10.0 00:01:32.548 SYMLINK 
libspdk_thread.so 00:01:32.807 CC lib/accel/accel.o 00:01:32.807 CC lib/blob/blobstore.o 00:01:32.807 CC lib/vfu_tgt/tgt_endpoint.o 00:01:32.807 CC lib/accel/accel_rpc.o 00:01:32.807 CC lib/blob/request.o 00:01:32.807 CC lib/vfu_tgt/tgt_rpc.o 00:01:32.807 CC lib/blob/zeroes.o 00:01:32.807 CC lib/accel/accel_sw.o 00:01:32.807 CC lib/blob/blob_bs_dev.o 00:01:32.807 CC lib/init/json_config.o 00:01:32.807 CC lib/init/subsystem.o 00:01:32.807 CC lib/virtio/virtio.o 00:01:32.807 CC lib/init/subsystem_rpc.o 00:01:32.807 CC lib/virtio/virtio_vhost_user.o 00:01:32.807 CC lib/init/rpc.o 00:01:32.807 CC lib/virtio/virtio_vfio_user.o 00:01:32.807 CC lib/virtio/virtio_pci.o 00:01:33.065 LIB libspdk_init.a 00:01:33.065 SO libspdk_init.so.5.0 00:01:33.065 LIB libspdk_vfu_tgt.a 00:01:33.065 LIB libspdk_virtio.a 00:01:33.065 SYMLINK libspdk_init.so 00:01:33.065 SO libspdk_vfu_tgt.so.3.0 00:01:33.065 SO libspdk_virtio.so.7.0 00:01:33.323 SYMLINK libspdk_vfu_tgt.so 00:01:33.323 SYMLINK libspdk_virtio.so 00:01:33.323 CC lib/event/app.o 00:01:33.323 CC lib/event/reactor.o 00:01:33.323 CC lib/event/log_rpc.o 00:01:33.324 CC lib/event/app_rpc.o 00:01:33.324 CC lib/event/scheduler_static.o 00:01:33.890 LIB libspdk_event.a 00:01:33.890 SO libspdk_event.so.13.0 00:01:33.890 SYMLINK libspdk_event.so 00:01:33.890 LIB libspdk_accel.a 00:01:33.890 SO libspdk_accel.so.15.0 00:01:33.890 LIB libspdk_nvme.a 00:01:33.890 SYMLINK libspdk_accel.so 00:01:34.148 SO libspdk_nvme.so.13.0 00:01:34.148 CC lib/bdev/bdev.o 00:01:34.148 CC lib/bdev/bdev_rpc.o 00:01:34.148 CC lib/bdev/bdev_zone.o 00:01:34.148 CC lib/bdev/part.o 00:01:34.148 CC lib/bdev/scsi_nvme.o 00:01:34.406 SYMLINK libspdk_nvme.so 00:01:35.780 LIB libspdk_blob.a 00:01:35.780 SO libspdk_blob.so.11.0 00:01:35.780 SYMLINK libspdk_blob.so 00:01:36.038 CC lib/lvol/lvol.o 00:01:36.038 CC lib/blobfs/blobfs.o 00:01:36.038 CC lib/blobfs/tree.o 00:01:36.612 LIB libspdk_bdev.a 00:01:36.612 SO libspdk_bdev.so.15.0 00:01:36.612 LIB libspdk_blobfs.a 
00:01:36.886 SYMLINK libspdk_bdev.so 00:01:36.886 SO libspdk_blobfs.so.10.0 00:01:36.886 LIB libspdk_lvol.a 00:01:36.886 SO libspdk_lvol.so.10.0 00:01:36.886 SYMLINK libspdk_blobfs.so 00:01:36.886 SYMLINK libspdk_lvol.so 00:01:36.886 CC lib/nvmf/ctrlr.o 00:01:36.886 CC lib/scsi/dev.o 00:01:36.886 CC lib/ublk/ublk.o 00:01:36.886 CC lib/nbd/nbd.o 00:01:36.886 CC lib/scsi/lun.o 00:01:36.886 CC lib/nvmf/ctrlr_discovery.o 00:01:36.886 CC lib/nbd/nbd_rpc.o 00:01:36.886 CC lib/ublk/ublk_rpc.o 00:01:36.886 CC lib/scsi/port.o 00:01:36.886 CC lib/ftl/ftl_core.o 00:01:36.886 CC lib/nvmf/ctrlr_bdev.o 00:01:36.886 CC lib/ftl/ftl_init.o 00:01:36.886 CC lib/nvmf/subsystem.o 00:01:36.886 CC lib/scsi/scsi.o 00:01:36.886 CC lib/ftl/ftl_layout.o 00:01:36.886 CC lib/nvmf/nvmf.o 00:01:36.886 CC lib/scsi/scsi_bdev.o 00:01:36.886 CC lib/ftl/ftl_debug.o 00:01:36.886 CC lib/nvmf/nvmf_rpc.o 00:01:36.886 CC lib/scsi/scsi_pr.o 00:01:36.886 CC lib/ftl/ftl_io.o 00:01:36.886 CC lib/scsi/scsi_rpc.o 00:01:36.886 CC lib/nvmf/transport.o 00:01:36.886 CC lib/ftl/ftl_sb.o 00:01:36.886 CC lib/scsi/task.o 00:01:36.886 CC lib/nvmf/tcp.o 00:01:36.886 CC lib/ftl/ftl_l2p_flat.o 00:01:36.886 CC lib/ftl/ftl_l2p.o 00:01:36.886 CC lib/nvmf/vfio_user.o 00:01:36.886 CC lib/ftl/ftl_nv_cache.o 00:01:36.886 CC lib/ftl/ftl_band.o 00:01:36.886 CC lib/nvmf/rdma.o 00:01:36.886 CC lib/ftl/ftl_band_ops.o 00:01:36.886 CC lib/ftl/ftl_writer.o 00:01:36.886 CC lib/ftl/ftl_reloc.o 00:01:36.886 CC lib/ftl/ftl_rq.o 00:01:36.886 CC lib/ftl/ftl_l2p_cache.o 00:01:36.886 CC lib/ftl/ftl_p2l.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:36.886 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:36.886 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:01:37.146 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:37.411 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:37.411 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:37.411 CC lib/ftl/utils/ftl_conf.o 00:01:37.411 CC lib/ftl/utils/ftl_md.o 00:01:37.411 CC lib/ftl/utils/ftl_mempool.o 00:01:37.411 CC lib/ftl/utils/ftl_bitmap.o 00:01:37.411 CC lib/ftl/utils/ftl_property.o 00:01:37.411 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:37.411 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:37.411 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:37.411 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:37.411 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:37.411 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:37.411 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:37.411 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:37.411 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:37.411 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:37.411 CC lib/ftl/base/ftl_base_dev.o 00:01:37.411 CC lib/ftl/base/ftl_base_bdev.o 00:01:37.411 CC lib/ftl/ftl_trace.o 00:01:37.674 LIB libspdk_nbd.a 00:01:37.674 SO libspdk_nbd.so.7.0 00:01:37.674 LIB libspdk_scsi.a 00:01:37.932 SYMLINK libspdk_nbd.so 00:01:37.932 SO libspdk_scsi.so.9.0 00:01:37.932 SYMLINK libspdk_scsi.so 00:01:37.932 LIB libspdk_ublk.a 00:01:37.932 SO libspdk_ublk.so.3.0 00:01:37.932 SYMLINK libspdk_ublk.so 00:01:38.190 CC lib/vhost/vhost.o 00:01:38.190 CC lib/iscsi/conn.o 00:01:38.190 CC lib/vhost/vhost_rpc.o 00:01:38.190 CC lib/iscsi/init_grp.o 00:01:38.190 CC lib/vhost/vhost_scsi.o 00:01:38.190 CC lib/iscsi/iscsi.o 00:01:38.190 CC lib/vhost/vhost_blk.o 00:01:38.190 CC lib/iscsi/md5.o 00:01:38.190 CC lib/vhost/rte_vhost_user.o 00:01:38.190 CC lib/iscsi/param.o 00:01:38.190 CC lib/iscsi/portal_grp.o 00:01:38.190 CC lib/iscsi/tgt_node.o 00:01:38.190 CC lib/iscsi/iscsi_subsystem.o 00:01:38.190 CC lib/iscsi/iscsi_rpc.o 00:01:38.190 CC lib/iscsi/task.o 00:01:38.190 LIB libspdk_ftl.a 00:01:38.449 SO libspdk_ftl.so.9.0 00:01:38.712 SYMLINK libspdk_ftl.so 00:01:39.323 LIB libspdk_vhost.a 
00:01:39.323 SO libspdk_vhost.so.8.0 00:01:39.323 LIB libspdk_nvmf.a 00:01:39.323 SYMLINK libspdk_vhost.so 00:01:39.581 SO libspdk_nvmf.so.18.0 00:01:39.581 LIB libspdk_iscsi.a 00:01:39.581 SO libspdk_iscsi.so.8.0 00:01:39.581 SYMLINK libspdk_nvmf.so 00:01:39.839 SYMLINK libspdk_iscsi.so 00:01:40.098 CC module/env_dpdk/env_dpdk_rpc.o 00:01:40.098 CC module/vfu_device/vfu_virtio.o 00:01:40.098 CC module/vfu_device/vfu_virtio_blk.o 00:01:40.098 CC module/vfu_device/vfu_virtio_scsi.o 00:01:40.098 CC module/vfu_device/vfu_virtio_rpc.o 00:01:40.098 CC module/accel/error/accel_error.o 00:01:40.098 CC module/keyring/file/keyring.o 00:01:40.098 CC module/accel/iaa/accel_iaa.o 00:01:40.098 CC module/keyring/file/keyring_rpc.o 00:01:40.098 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:40.098 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:40.098 CC module/accel/error/accel_error_rpc.o 00:01:40.098 CC module/accel/ioat/accel_ioat.o 00:01:40.098 CC module/sock/posix/posix.o 00:01:40.098 CC module/blob/bdev/blob_bdev.o 00:01:40.098 CC module/accel/ioat/accel_ioat_rpc.o 00:01:40.098 CC module/accel/iaa/accel_iaa_rpc.o 00:01:40.098 CC module/accel/dsa/accel_dsa.o 00:01:40.098 CC module/scheduler/gscheduler/gscheduler.o 00:01:40.098 CC module/accel/dsa/accel_dsa_rpc.o 00:01:40.098 LIB libspdk_env_dpdk_rpc.a 00:01:40.098 SO libspdk_env_dpdk_rpc.so.6.0 00:01:40.098 SYMLINK libspdk_env_dpdk_rpc.so 00:01:40.356 LIB libspdk_keyring_file.a 00:01:40.356 LIB libspdk_scheduler_gscheduler.a 00:01:40.356 LIB libspdk_scheduler_dpdk_governor.a 00:01:40.356 SO libspdk_scheduler_gscheduler.so.4.0 00:01:40.356 SO libspdk_keyring_file.so.1.0 00:01:40.356 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:40.356 LIB libspdk_accel_error.a 00:01:40.356 LIB libspdk_accel_ioat.a 00:01:40.356 LIB libspdk_scheduler_dynamic.a 00:01:40.356 LIB libspdk_accel_iaa.a 00:01:40.356 SO libspdk_accel_error.so.2.0 00:01:40.356 SO libspdk_scheduler_dynamic.so.4.0 00:01:40.356 SO 
libspdk_accel_ioat.so.6.0 00:01:40.356 SYMLINK libspdk_scheduler_gscheduler.so 00:01:40.356 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:40.356 SYMLINK libspdk_keyring_file.so 00:01:40.356 SO libspdk_accel_iaa.so.3.0 00:01:40.356 LIB libspdk_accel_dsa.a 00:01:40.356 SYMLINK libspdk_scheduler_dynamic.so 00:01:40.356 LIB libspdk_blob_bdev.a 00:01:40.356 SYMLINK libspdk_accel_error.so 00:01:40.356 SO libspdk_accel_dsa.so.5.0 00:01:40.356 SYMLINK libspdk_accel_ioat.so 00:01:40.356 SYMLINK libspdk_accel_iaa.so 00:01:40.356 SO libspdk_blob_bdev.so.11.0 00:01:40.356 SYMLINK libspdk_accel_dsa.so 00:01:40.356 SYMLINK libspdk_blob_bdev.so 00:01:40.615 LIB libspdk_vfu_device.a 00:01:40.615 SO libspdk_vfu_device.so.3.0 00:01:40.615 CC module/bdev/iscsi/bdev_iscsi.o 00:01:40.615 CC module/bdev/delay/vbdev_delay.o 00:01:40.615 CC module/bdev/passthru/vbdev_passthru.o 00:01:40.615 CC module/bdev/raid/bdev_raid.o 00:01:40.615 CC module/bdev/null/bdev_null.o 00:01:40.615 CC module/blobfs/bdev/blobfs_bdev.o 00:01:40.615 CC module/bdev/error/vbdev_error.o 00:01:40.615 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:40.615 CC module/bdev/aio/bdev_aio.o 00:01:40.615 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:40.615 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:40.615 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:40.615 CC module/bdev/error/vbdev_error_rpc.o 00:01:40.615 CC module/bdev/ftl/bdev_ftl.o 00:01:40.615 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:40.615 CC module/bdev/null/bdev_null_rpc.o 00:01:40.615 CC module/bdev/aio/bdev_aio_rpc.o 00:01:40.615 CC module/bdev/raid/bdev_raid_rpc.o 00:01:40.615 CC module/bdev/gpt/gpt.o 00:01:40.615 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:40.615 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:40.615 CC module/bdev/split/vbdev_split.o 00:01:40.615 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:40.615 CC module/bdev/lvol/vbdev_lvol.o 00:01:40.615 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:40.615 CC module/bdev/raid/bdev_raid_sb.o 
00:01:40.615 CC module/bdev/gpt/vbdev_gpt.o 00:01:40.615 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:40.615 CC module/bdev/nvme/bdev_nvme.o 00:01:40.615 CC module/bdev/raid/raid0.o 00:01:40.615 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:40.615 CC module/bdev/split/vbdev_split_rpc.o 00:01:40.615 CC module/bdev/malloc/bdev_malloc.o 00:01:40.615 CC module/bdev/raid/raid1.o 00:01:40.615 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:40.615 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:40.615 CC module/bdev/raid/concat.o 00:01:40.615 CC module/bdev/nvme/nvme_rpc.o 00:01:40.615 CC module/bdev/nvme/bdev_mdns_client.o 00:01:40.615 CC module/bdev/nvme/vbdev_opal.o 00:01:40.615 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:40.615 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:40.875 SYMLINK libspdk_vfu_device.so 00:01:40.875 LIB libspdk_sock_posix.a 00:01:41.133 SO libspdk_sock_posix.so.6.0 00:01:41.133 LIB libspdk_blobfs_bdev.a 00:01:41.133 SO libspdk_blobfs_bdev.so.6.0 00:01:41.133 LIB libspdk_bdev_aio.a 00:01:41.133 LIB libspdk_bdev_split.a 00:01:41.133 SYMLINK libspdk_sock_posix.so 00:01:41.133 LIB libspdk_bdev_null.a 00:01:41.133 LIB libspdk_bdev_error.a 00:01:41.133 SO libspdk_bdev_aio.so.6.0 00:01:41.133 SO libspdk_bdev_split.so.6.0 00:01:41.133 LIB libspdk_bdev_gpt.a 00:01:41.133 SO libspdk_bdev_null.so.6.0 00:01:41.133 SYMLINK libspdk_blobfs_bdev.so 00:01:41.133 SO libspdk_bdev_error.so.6.0 00:01:41.133 SO libspdk_bdev_gpt.so.6.0 00:01:41.133 LIB libspdk_bdev_ftl.a 00:01:41.133 SYMLINK libspdk_bdev_aio.so 00:01:41.133 SYMLINK libspdk_bdev_split.so 00:01:41.133 SO libspdk_bdev_ftl.so.6.0 00:01:41.133 LIB libspdk_bdev_passthru.a 00:01:41.133 SYMLINK libspdk_bdev_null.so 00:01:41.133 LIB libspdk_bdev_zone_block.a 00:01:41.133 LIB libspdk_bdev_delay.a 00:01:41.133 SYMLINK libspdk_bdev_error.so 00:01:41.133 SYMLINK libspdk_bdev_gpt.so 00:01:41.133 SO libspdk_bdev_passthru.so.6.0 00:01:41.391 SO libspdk_bdev_zone_block.so.6.0 00:01:41.391 LIB libspdk_bdev_malloc.a 
00:01:41.391 SO libspdk_bdev_delay.so.6.0 00:01:41.391 LIB libspdk_bdev_iscsi.a 00:01:41.391 SYMLINK libspdk_bdev_ftl.so 00:01:41.391 SO libspdk_bdev_malloc.so.6.0 00:01:41.391 SO libspdk_bdev_iscsi.so.6.0 00:01:41.391 SYMLINK libspdk_bdev_passthru.so 00:01:41.391 SYMLINK libspdk_bdev_zone_block.so 00:01:41.391 SYMLINK libspdk_bdev_delay.so 00:01:41.391 SYMLINK libspdk_bdev_malloc.so 00:01:41.391 SYMLINK libspdk_bdev_iscsi.so 00:01:41.391 LIB libspdk_bdev_lvol.a 00:01:41.391 SO libspdk_bdev_lvol.so.6.0 00:01:41.391 LIB libspdk_bdev_virtio.a 00:01:41.391 SYMLINK libspdk_bdev_lvol.so 00:01:41.391 SO libspdk_bdev_virtio.so.6.0 00:01:41.649 SYMLINK libspdk_bdev_virtio.so 00:01:41.649 LIB libspdk_bdev_raid.a 00:01:41.649 SO libspdk_bdev_raid.so.6.0 00:01:41.907 SYMLINK libspdk_bdev_raid.so 00:01:42.844 LIB libspdk_bdev_nvme.a 00:01:43.103 SO libspdk_bdev_nvme.so.7.0 00:01:43.103 SYMLINK libspdk_bdev_nvme.so 00:01:43.361 CC module/event/subsystems/iobuf/iobuf.o 00:01:43.361 CC module/event/subsystems/scheduler/scheduler.o 00:01:43.361 CC module/event/subsystems/sock/sock.o 00:01:43.361 CC module/event/subsystems/keyring/keyring.o 00:01:43.361 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:43.361 CC module/event/subsystems/vmd/vmd.o 00:01:43.361 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:43.361 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:43.361 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:43.620 LIB libspdk_event_sock.a 00:01:43.620 LIB libspdk_event_keyring.a 00:01:43.620 LIB libspdk_event_vhost_blk.a 00:01:43.620 LIB libspdk_event_vfu_tgt.a 00:01:43.620 LIB libspdk_event_scheduler.a 00:01:43.620 LIB libspdk_event_vmd.a 00:01:43.620 SO libspdk_event_sock.so.5.0 00:01:43.620 SO libspdk_event_keyring.so.1.0 00:01:43.620 LIB libspdk_event_iobuf.a 00:01:43.620 SO libspdk_event_vhost_blk.so.3.0 00:01:43.620 SO libspdk_event_vfu_tgt.so.3.0 00:01:43.620 SO libspdk_event_scheduler.so.4.0 00:01:43.620 SO libspdk_event_vmd.so.6.0 00:01:43.620 SO 
libspdk_event_iobuf.so.3.0 00:01:43.620 SYMLINK libspdk_event_sock.so 00:01:43.620 SYMLINK libspdk_event_keyring.so 00:01:43.620 SYMLINK libspdk_event_vhost_blk.so 00:01:43.620 SYMLINK libspdk_event_vfu_tgt.so 00:01:43.621 SYMLINK libspdk_event_scheduler.so 00:01:43.621 SYMLINK libspdk_event_vmd.so 00:01:43.621 SYMLINK libspdk_event_iobuf.so 00:01:43.879 CC module/event/subsystems/accel/accel.o 00:01:44.138 LIB libspdk_event_accel.a 00:01:44.138 SO libspdk_event_accel.so.6.0 00:01:44.138 SYMLINK libspdk_event_accel.so 00:01:44.396 CC module/event/subsystems/bdev/bdev.o 00:01:44.396 LIB libspdk_event_bdev.a 00:01:44.396 SO libspdk_event_bdev.so.6.0 00:01:44.654 SYMLINK libspdk_event_bdev.so 00:01:44.654 CC module/event/subsystems/ublk/ublk.o 00:01:44.654 CC module/event/subsystems/nbd/nbd.o 00:01:44.654 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:44.654 CC module/event/subsystems/scsi/scsi.o 00:01:44.654 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:44.912 LIB libspdk_event_ublk.a 00:01:44.912 LIB libspdk_event_nbd.a 00:01:44.912 LIB libspdk_event_scsi.a 00:01:44.912 SO libspdk_event_nbd.so.6.0 00:01:44.912 SO libspdk_event_ublk.so.3.0 00:01:44.912 SO libspdk_event_scsi.so.6.0 00:01:44.912 SYMLINK libspdk_event_ublk.so 00:01:44.912 SYMLINK libspdk_event_nbd.so 00:01:44.912 SYMLINK libspdk_event_scsi.so 00:01:44.912 LIB libspdk_event_nvmf.a 00:01:44.912 SO libspdk_event_nvmf.so.6.0 00:01:45.170 SYMLINK libspdk_event_nvmf.so 00:01:45.170 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:45.170 CC module/event/subsystems/iscsi/iscsi.o 00:01:45.170 LIB libspdk_event_vhost_scsi.a 00:01:45.170 LIB libspdk_event_iscsi.a 00:01:45.170 SO libspdk_event_vhost_scsi.so.3.0 00:01:45.429 SO libspdk_event_iscsi.so.6.0 00:01:45.429 SYMLINK libspdk_event_vhost_scsi.so 00:01:45.429 SYMLINK libspdk_event_iscsi.so 00:01:45.429 SO libspdk.so.6.0 00:01:45.429 SYMLINK libspdk.so 00:01:45.690 CC app/spdk_lspci/spdk_lspci.o 00:01:45.690 CXX app/trace/trace.o 00:01:45.690 
TEST_HEADER include/spdk/accel.h 00:01:45.690 TEST_HEADER include/spdk/accel_module.h 00:01:45.690 TEST_HEADER include/spdk/assert.h 00:01:45.690 TEST_HEADER include/spdk/barrier.h 00:01:45.690 TEST_HEADER include/spdk/base64.h 00:01:45.690 TEST_HEADER include/spdk/bdev.h 00:01:45.690 CC app/spdk_nvme_identify/identify.o 00:01:45.690 CC app/trace_record/trace_record.o 00:01:45.690 TEST_HEADER include/spdk/bdev_module.h 00:01:45.690 CC app/spdk_nvme_perf/perf.o 00:01:45.690 CC app/spdk_top/spdk_top.o 00:01:45.690 CC test/rpc_client/rpc_client_test.o 00:01:45.690 TEST_HEADER include/spdk/bdev_zone.h 00:01:45.690 CC app/spdk_nvme_discover/discovery_aer.o 00:01:45.690 TEST_HEADER include/spdk/bit_array.h 00:01:45.690 TEST_HEADER include/spdk/bit_pool.h 00:01:45.690 TEST_HEADER include/spdk/blob_bdev.h 00:01:45.690 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:45.690 TEST_HEADER include/spdk/blobfs.h 00:01:45.690 TEST_HEADER include/spdk/blob.h 00:01:45.690 TEST_HEADER include/spdk/conf.h 00:01:45.690 TEST_HEADER include/spdk/config.h 00:01:45.690 TEST_HEADER include/spdk/cpuset.h 00:01:45.690 TEST_HEADER include/spdk/crc16.h 00:01:45.690 TEST_HEADER include/spdk/crc32.h 00:01:45.690 TEST_HEADER include/spdk/crc64.h 00:01:45.690 TEST_HEADER include/spdk/dif.h 00:01:45.690 TEST_HEADER include/spdk/dma.h 00:01:45.690 CC app/spdk_dd/spdk_dd.o 00:01:45.690 TEST_HEADER include/spdk/endian.h 00:01:45.690 TEST_HEADER include/spdk/env_dpdk.h 00:01:45.690 TEST_HEADER include/spdk/env.h 00:01:45.690 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:45.690 TEST_HEADER include/spdk/event.h 00:01:45.690 CC app/iscsi_tgt/iscsi_tgt.o 00:01:45.690 CC app/nvmf_tgt/nvmf_main.o 00:01:45.690 TEST_HEADER include/spdk/fd_group.h 00:01:45.690 TEST_HEADER include/spdk/fd.h 00:01:45.690 CC app/vhost/vhost.o 00:01:45.690 TEST_HEADER include/spdk/file.h 00:01:45.690 TEST_HEADER include/spdk/ftl.h 00:01:45.690 TEST_HEADER include/spdk/gpt_spec.h 00:01:45.690 TEST_HEADER include/spdk/hexlify.h 
00:01:45.690 TEST_HEADER include/spdk/histogram_data.h 00:01:45.690 TEST_HEADER include/spdk/idxd.h 00:01:45.690 TEST_HEADER include/spdk/idxd_spec.h 00:01:45.967 TEST_HEADER include/spdk/init.h 00:01:45.967 TEST_HEADER include/spdk/ioat.h 00:01:45.967 TEST_HEADER include/spdk/ioat_spec.h 00:01:45.967 TEST_HEADER include/spdk/iscsi_spec.h 00:01:45.967 TEST_HEADER include/spdk/json.h 00:01:45.967 TEST_HEADER include/spdk/jsonrpc.h 00:01:45.967 CC examples/vmd/lsvmd/lsvmd.o 00:01:45.967 TEST_HEADER include/spdk/keyring.h 00:01:45.967 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:45.967 CC examples/nvme/reconnect/reconnect.o 00:01:45.967 TEST_HEADER include/spdk/keyring_module.h 00:01:45.967 CC examples/vmd/led/led.o 00:01:45.967 CC test/app/histogram_perf/histogram_perf.o 00:01:45.967 CC app/spdk_tgt/spdk_tgt.o 00:01:45.967 TEST_HEADER include/spdk/likely.h 00:01:45.967 CC app/fio/nvme/fio_plugin.o 00:01:45.967 CC test/event/event_perf/event_perf.o 00:01:45.967 TEST_HEADER include/spdk/log.h 00:01:45.967 CC examples/nvme/hello_world/hello_world.o 00:01:45.967 CC examples/sock/hello_world/hello_sock.o 00:01:45.967 CC examples/util/zipf/zipf.o 00:01:45.967 CC test/app/jsoncat/jsoncat.o 00:01:45.967 CC examples/ioat/perf/perf.o 00:01:45.967 CC test/env/vtophys/vtophys.o 00:01:45.967 TEST_HEADER include/spdk/lvol.h 00:01:45.967 CC test/thread/poller_perf/poller_perf.o 00:01:45.967 TEST_HEADER include/spdk/memory.h 00:01:45.967 CC test/event/reactor/reactor.o 00:01:45.967 CC examples/accel/perf/accel_perf.o 00:01:45.967 TEST_HEADER include/spdk/mmio.h 00:01:45.967 CC test/event/reactor_perf/reactor_perf.o 00:01:45.967 CC examples/nvme/arbitration/arbitration.o 00:01:45.967 TEST_HEADER include/spdk/nbd.h 00:01:45.967 CC test/nvme/aer/aer.o 00:01:45.967 TEST_HEADER include/spdk/notify.h 00:01:45.967 TEST_HEADER include/spdk/nvme.h 00:01:45.967 CC examples/idxd/perf/perf.o 00:01:45.967 TEST_HEADER include/spdk/nvme_intel.h 00:01:45.967 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:01:45.967 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:45.967 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:45.967 TEST_HEADER include/spdk/nvme_spec.h 00:01:45.967 TEST_HEADER include/spdk/nvme_zns.h 00:01:45.967 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:45.967 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:45.967 TEST_HEADER include/spdk/nvmf.h 00:01:45.967 TEST_HEADER include/spdk/nvmf_spec.h 00:01:45.967 CC test/bdev/bdevio/bdevio.o 00:01:45.967 CC examples/bdev/hello_world/hello_bdev.o 00:01:45.967 TEST_HEADER include/spdk/nvmf_transport.h 00:01:45.967 CC test/accel/dif/dif.o 00:01:45.967 TEST_HEADER include/spdk/opal.h 00:01:45.967 CC examples/blob/hello_world/hello_blob.o 00:01:45.967 TEST_HEADER include/spdk/opal_spec.h 00:01:45.967 CC test/dma/test_dma/test_dma.o 00:01:45.967 CC examples/nvmf/nvmf/nvmf.o 00:01:45.967 TEST_HEADER include/spdk/pci_ids.h 00:01:45.967 CC test/app/bdev_svc/bdev_svc.o 00:01:45.967 TEST_HEADER include/spdk/pipe.h 00:01:45.967 CC examples/thread/thread/thread_ex.o 00:01:45.967 TEST_HEADER include/spdk/queue.h 00:01:45.967 TEST_HEADER include/spdk/reduce.h 00:01:45.967 CC test/blobfs/mkfs/mkfs.o 00:01:45.967 TEST_HEADER include/spdk/rpc.h 00:01:45.967 TEST_HEADER include/spdk/scheduler.h 00:01:45.967 TEST_HEADER include/spdk/scsi.h 00:01:45.967 TEST_HEADER include/spdk/scsi_spec.h 00:01:45.967 TEST_HEADER include/spdk/sock.h 00:01:45.967 TEST_HEADER include/spdk/stdinc.h 00:01:45.968 TEST_HEADER include/spdk/string.h 00:01:45.968 TEST_HEADER include/spdk/thread.h 00:01:45.968 TEST_HEADER include/spdk/trace.h 00:01:45.968 TEST_HEADER include/spdk/trace_parser.h 00:01:45.968 TEST_HEADER include/spdk/tree.h 00:01:45.968 TEST_HEADER include/spdk/ublk.h 00:01:45.968 CC test/env/mem_callbacks/mem_callbacks.o 00:01:45.968 TEST_HEADER include/spdk/util.h 00:01:45.968 LINK spdk_lspci 00:01:45.968 TEST_HEADER include/spdk/uuid.h 00:01:45.968 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:45.968 
TEST_HEADER include/spdk/version.h 00:01:45.968 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:45.968 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:45.968 TEST_HEADER include/spdk/vhost.h 00:01:45.968 TEST_HEADER include/spdk/vmd.h 00:01:45.968 TEST_HEADER include/spdk/xor.h 00:01:45.968 CC test/lvol/esnap/esnap.o 00:01:45.968 TEST_HEADER include/spdk/zipf.h 00:01:45.968 CXX test/cpp_headers/accel.o 00:01:46.236 LINK rpc_client_test 00:01:46.236 LINK lsvmd 00:01:46.236 LINK spdk_nvme_discover 00:01:46.236 LINK jsoncat 00:01:46.236 LINK histogram_perf 00:01:46.236 LINK interrupt_tgt 00:01:46.236 LINK led 00:01:46.236 LINK reactor_perf 00:01:46.236 LINK reactor 00:01:46.236 LINK nvmf_tgt 00:01:46.236 LINK event_perf 00:01:46.236 LINK zipf 00:01:46.236 LINK vtophys 00:01:46.236 LINK poller_perf 00:01:46.236 LINK vhost 00:01:46.236 LINK spdk_trace_record 00:01:46.236 LINK iscsi_tgt 00:01:46.236 LINK env_dpdk_post_init 00:01:46.236 LINK bdev_svc 00:01:46.236 LINK spdk_tgt 00:01:46.236 LINK ioat_perf 00:01:46.236 LINK hello_world 00:01:46.236 LINK mkfs 00:01:46.236 CXX test/cpp_headers/accel_module.o 00:01:46.510 LINK hello_sock 00:01:46.510 LINK hello_bdev 00:01:46.510 LINK hello_blob 00:01:46.510 LINK aer 00:01:46.510 LINK thread 00:01:46.510 LINK spdk_dd 00:01:46.510 LINK nvmf 00:01:46.510 LINK arbitration 00:01:46.510 CXX test/cpp_headers/assert.o 00:01:46.510 LINK reconnect 00:01:46.510 LINK spdk_trace 00:01:46.510 LINK idxd_perf 00:01:46.510 CC examples/nvme/hotplug/hotplug.o 00:01:46.510 CXX test/cpp_headers/barrier.o 00:01:46.510 LINK dif 00:01:46.780 LINK bdevio 00:01:46.780 CXX test/cpp_headers/base64.o 00:01:46.780 LINK test_dma 00:01:46.780 CC test/env/memory/memory_ut.o 00:01:46.780 CC test/env/pci/pci_ut.o 00:01:46.780 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:46.780 CC test/app/stub/stub.o 00:01:46.780 CC examples/blob/cli/blobcli.o 00:01:46.780 CXX test/cpp_headers/bdev.o 00:01:46.780 CC examples/bdev/bdevperf/bdevperf.o 00:01:46.780 CC 
examples/nvme/abort/abort.o 00:01:46.780 CC app/fio/bdev/fio_plugin.o 00:01:46.780 CC test/nvme/reset/reset.o 00:01:46.780 CC test/event/app_repeat/app_repeat.o 00:01:46.780 CC test/nvme/sgl/sgl.o 00:01:46.780 CC test/event/scheduler/scheduler.o 00:01:46.780 LINK nvme_fuzz 00:01:46.780 LINK accel_perf 00:01:46.780 LINK nvme_manage 00:01:46.780 CC examples/ioat/verify/verify.o 00:01:46.780 CXX test/cpp_headers/bdev_module.o 00:01:46.780 CC test/nvme/overhead/overhead.o 00:01:46.780 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:46.780 CXX test/cpp_headers/bdev_zone.o 00:01:46.780 CC test/nvme/err_injection/err_injection.o 00:01:46.780 CC test/nvme/e2edp/nvme_dp.o 00:01:46.780 CXX test/cpp_headers/bit_array.o 00:01:46.780 CXX test/cpp_headers/bit_pool.o 00:01:47.042 CXX test/cpp_headers/blob_bdev.o 00:01:47.042 LINK spdk_nvme 00:01:47.042 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:47.042 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:47.042 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:47.042 CXX test/cpp_headers/blobfs_bdev.o 00:01:47.042 CXX test/cpp_headers/blobfs.o 00:01:47.042 CXX test/cpp_headers/blob.o 00:01:47.042 CC test/nvme/startup/startup.o 00:01:47.042 CXX test/cpp_headers/conf.o 00:01:47.042 CC test/nvme/reserve/reserve.o 00:01:47.042 CXX test/cpp_headers/config.o 00:01:47.042 LINK hotplug 00:01:47.042 LINK cmb_copy 00:01:47.042 LINK app_repeat 00:01:47.042 CXX test/cpp_headers/cpuset.o 00:01:47.042 CXX test/cpp_headers/crc16.o 00:01:47.042 LINK stub 00:01:47.042 CC test/nvme/simple_copy/simple_copy.o 00:01:47.042 CXX test/cpp_headers/crc32.o 00:01:47.042 CC test/nvme/connect_stress/connect_stress.o 00:01:47.306 CC test/nvme/boot_partition/boot_partition.o 00:01:47.306 LINK mem_callbacks 00:01:47.306 CXX test/cpp_headers/crc64.o 00:01:47.306 CXX test/cpp_headers/dif.o 00:01:47.306 CC test/nvme/compliance/nvme_compliance.o 00:01:47.306 LINK scheduler 00:01:47.306 CC test/nvme/fused_ordering/fused_ordering.o 00:01:47.306 CXX 
test/cpp_headers/dma.o 00:01:47.306 LINK err_injection 00:01:47.306 LINK spdk_nvme_perf 00:01:47.306 LINK pmr_persistence 00:01:47.306 LINK verify 00:01:47.306 LINK reset 00:01:47.306 CXX test/cpp_headers/endian.o 00:01:47.306 CXX test/cpp_headers/env_dpdk.o 00:01:47.306 CXX test/cpp_headers/env.o 00:01:47.306 LINK sgl 00:01:47.306 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:47.306 CXX test/cpp_headers/event.o 00:01:47.306 LINK startup 00:01:47.306 CXX test/cpp_headers/fd_group.o 00:01:47.306 CXX test/cpp_headers/fd.o 00:01:47.306 CXX test/cpp_headers/file.o 00:01:47.306 LINK spdk_nvme_identify 00:01:47.306 CC test/nvme/fdp/fdp.o 00:01:47.569 CC test/nvme/cuse/cuse.o 00:01:47.569 LINK overhead 00:01:47.569 LINK spdk_top 00:01:47.569 CXX test/cpp_headers/ftl.o 00:01:47.569 CXX test/cpp_headers/gpt_spec.o 00:01:47.569 CXX test/cpp_headers/hexlify.o 00:01:47.569 CXX test/cpp_headers/histogram_data.o 00:01:47.569 LINK nvme_dp 00:01:47.569 LINK pci_ut 00:01:47.569 CXX test/cpp_headers/idxd.o 00:01:47.569 CXX test/cpp_headers/idxd_spec.o 00:01:47.569 LINK abort 00:01:47.569 LINK reserve 00:01:47.569 CXX test/cpp_headers/init.o 00:01:47.569 CXX test/cpp_headers/ioat.o 00:01:47.569 LINK boot_partition 00:01:47.569 CXX test/cpp_headers/ioat_spec.o 00:01:47.569 LINK connect_stress 00:01:47.569 CXX test/cpp_headers/iscsi_spec.o 00:01:47.569 CXX test/cpp_headers/json.o 00:01:47.569 LINK simple_copy 00:01:47.569 CXX test/cpp_headers/jsonrpc.o 00:01:47.569 CXX test/cpp_headers/keyring.o 00:01:47.569 CXX test/cpp_headers/keyring_module.o 00:01:47.569 LINK spdk_bdev 00:01:47.569 CXX test/cpp_headers/likely.o 00:01:47.569 LINK fused_ordering 00:01:47.569 CXX test/cpp_headers/log.o 00:01:47.837 LINK blobcli 00:01:47.837 CXX test/cpp_headers/lvol.o 00:01:47.838 CXX test/cpp_headers/memory.o 00:01:47.838 CXX test/cpp_headers/mmio.o 00:01:47.838 CXX test/cpp_headers/nbd.o 00:01:47.838 CXX test/cpp_headers/notify.o 00:01:47.838 CXX test/cpp_headers/nvme.o 00:01:47.838 CXX 
test/cpp_headers/nvme_intel.o 00:01:47.838 CXX test/cpp_headers/nvme_ocssd.o 00:01:47.838 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:47.838 LINK vhost_fuzz 00:01:47.838 CXX test/cpp_headers/nvme_spec.o 00:01:47.838 LINK doorbell_aers 00:01:47.838 CXX test/cpp_headers/nvme_zns.o 00:01:47.838 CXX test/cpp_headers/nvmf_cmd.o 00:01:47.838 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:47.838 CXX test/cpp_headers/nvmf.o 00:01:47.838 CXX test/cpp_headers/nvmf_spec.o 00:01:47.838 CXX test/cpp_headers/opal.o 00:01:47.838 CXX test/cpp_headers/nvmf_transport.o 00:01:47.838 CXX test/cpp_headers/opal_spec.o 00:01:47.838 CXX test/cpp_headers/pci_ids.o 00:01:47.838 CXX test/cpp_headers/pipe.o 00:01:47.838 CXX test/cpp_headers/queue.o 00:01:47.838 CXX test/cpp_headers/reduce.o 00:01:47.838 LINK nvme_compliance 00:01:47.838 CXX test/cpp_headers/rpc.o 00:01:47.838 CXX test/cpp_headers/scheduler.o 00:01:47.838 CXX test/cpp_headers/scsi.o 00:01:47.838 CXX test/cpp_headers/scsi_spec.o 00:01:47.838 CXX test/cpp_headers/sock.o 00:01:47.838 CXX test/cpp_headers/stdinc.o 00:01:47.838 CXX test/cpp_headers/string.o 00:01:47.838 CXX test/cpp_headers/thread.o 00:01:48.096 CXX test/cpp_headers/trace.o 00:01:48.096 CXX test/cpp_headers/trace_parser.o 00:01:48.096 CXX test/cpp_headers/tree.o 00:01:48.096 CXX test/cpp_headers/ublk.o 00:01:48.096 CXX test/cpp_headers/util.o 00:01:48.096 CXX test/cpp_headers/uuid.o 00:01:48.096 CXX test/cpp_headers/version.o 00:01:48.096 CXX test/cpp_headers/vfio_user_pci.o 00:01:48.096 CXX test/cpp_headers/vfio_user_spec.o 00:01:48.096 CXX test/cpp_headers/vhost.o 00:01:48.096 CXX test/cpp_headers/vmd.o 00:01:48.096 CXX test/cpp_headers/xor.o 00:01:48.097 CXX test/cpp_headers/zipf.o 00:01:48.097 LINK fdp 00:01:48.355 LINK bdevperf 00:01:48.355 LINK memory_ut 00:01:48.921 LINK cuse 00:01:49.180 LINK iscsi_fuzz 00:01:52.463 LINK esnap 00:01:52.463 00:01:52.463 real 0m48.837s 00:01:52.463 user 10m16.853s 00:01:52.463 sys 2m27.234s 00:01:52.463 13:30:55 -- 
common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:52.463 13:30:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.463 ************************************ 00:01:52.463 END TEST make 00:01:52.463 ************************************ 00:01:52.463 13:30:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:52.463 13:30:55 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:52.463 13:30:55 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:52.463 13:30:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.463 13:30:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:52.463 13:30:55 -- pm/common@45 -- $ pid=2396941 00:01:52.463 13:30:55 -- pm/common@52 -- $ sudo kill -TERM 2396941 00:01:52.463 13:30:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.463 13:30:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:52.464 13:30:55 -- pm/common@45 -- $ pid=2396940 00:01:52.464 13:30:55 -- pm/common@52 -- $ sudo kill -TERM 2396940 00:01:52.464 13:30:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:52.464 13:30:55 -- pm/common@45 -- $ pid=2396942 00:01:52.464 13:30:55 -- pm/common@52 -- $ sudo kill -TERM 2396942 00:01:52.464 13:30:55 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:52.464 13:30:55 -- pm/common@45 -- $ pid=2396943 00:01:52.464 13:30:55 -- pm/common@52 -- $ sudo kill -TERM 2396943 00:01:52.464 13:30:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:52.464 13:30:55 -- 
nvmf/common.sh@7 -- # uname -s 00:01:52.464 13:30:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:52.464 13:30:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:52.464 13:30:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:52.464 13:30:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:52.464 13:30:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:52.464 13:30:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:52.464 13:30:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:52.464 13:30:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:52.464 13:30:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:52.464 13:30:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:52.464 13:30:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:01:52.464 13:30:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:01:52.464 13:30:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:52.464 13:30:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:52.464 13:30:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:52.464 13:30:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:52.464 13:30:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:52.464 13:30:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:52.464 13:30:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:52.464 13:30:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:52.464 13:30:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:52.464 13:30:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.464 13:30:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.464 13:30:55 -- paths/export.sh@5 -- # export PATH 00:01:52.464 13:30:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.464 13:30:55 -- nvmf/common.sh@47 -- # : 0 00:01:52.464 13:30:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:52.464 13:30:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:52.464 13:30:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:52.464 13:30:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:52.464 13:30:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:52.464 13:30:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:52.464 13:30:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:52.464 13:30:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:52.464 13:30:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:52.464 13:30:55 -- spdk/autotest.sh@32 -- # uname -s 00:01:52.464 13:30:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:52.464 13:30:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:52.464 13:30:55 -- 
spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:52.464 13:30:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:52.464 13:30:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:52.464 13:30:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:52.464 13:30:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:52.464 13:30:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:52.464 13:30:55 -- spdk/autotest.sh@48 -- # udevadm_pid=2452814 00:01:52.464 13:30:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:52.464 13:30:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:52.464 13:30:55 -- pm/common@17 -- # local monitor 00:01:52.464 13:30:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2452817 00:01:52.464 13:30:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2452819 00:01:52.464 13:30:55 -- pm/common@21 -- # date +%s 00:01:52.464 13:30:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@21 -- # date +%s 00:01:52.464 13:30:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2452823 00:01:52.464 13:30:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.464 13:30:55 -- pm/common@21 -- # date +%s 00:01:52.464 13:30:55 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2452827 00:01:52.464 13:30:55 -- pm/common@26 -- # sleep 1 00:01:52.464 13:30:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439855 
00:01:52.464 13:30:55 -- pm/common@21 -- # date +%s 00:01:52.464 13:30:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439855 00:01:52.464 13:30:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439855 00:01:52.464 13:30:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713439855 00:01:52.722 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439855_collect-vmstat.pm.log 00:01:52.722 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439855_collect-bmc-pm.bmc.pm.log 00:01:52.722 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439855_collect-cpu-load.pm.log 00:01:52.722 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713439855_collect-cpu-temp.pm.log 00:01:53.690 13:30:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:53.690 13:30:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:53.690 13:30:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:53.690 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:01:53.690 13:30:56 -- spdk/autotest.sh@59 -- # create_test_list 00:01:53.690 13:30:56 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:53.690 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:01:53.690 13:30:56 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:53.690 13:30:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.690 13:30:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.690 13:30:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:53.690 13:30:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.690 13:30:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:53.690 13:30:56 -- common/autotest_common.sh@1441 -- # uname 00:01:53.690 13:30:56 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:53.690 13:30:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:53.690 13:30:56 -- common/autotest_common.sh@1461 -- # uname 00:01:53.690 13:30:56 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:53.690 13:30:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:53.690 13:30:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:53.690 13:30:56 -- spdk/autotest.sh@72 -- # hash lcov 00:01:53.690 13:30:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:53.690 13:30:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:53.690 --rc lcov_branch_coverage=1 00:01:53.690 --rc lcov_function_coverage=1 00:01:53.690 --rc genhtml_branch_coverage=1 00:01:53.690 --rc genhtml_function_coverage=1 00:01:53.690 --rc genhtml_legend=1 00:01:53.690 --rc geninfo_all_blocks=1 00:01:53.690 ' 00:01:53.690 13:30:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:53.690 --rc lcov_branch_coverage=1 00:01:53.690 --rc lcov_function_coverage=1 00:01:53.690 --rc genhtml_branch_coverage=1 00:01:53.690 --rc genhtml_function_coverage=1 00:01:53.690 --rc genhtml_legend=1 00:01:53.690 --rc geninfo_all_blocks=1 00:01:53.690 ' 00:01:53.690 13:30:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:53.690 --rc 
lcov_branch_coverage=1 00:01:53.690 --rc lcov_function_coverage=1 00:01:53.690 --rc genhtml_branch_coverage=1 00:01:53.690 --rc genhtml_function_coverage=1 00:01:53.690 --rc genhtml_legend=1 00:01:53.690 --rc geninfo_all_blocks=1 00:01:53.690 --no-external' 00:01:53.690 13:30:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:53.690 --rc lcov_branch_coverage=1 00:01:53.690 --rc lcov_function_coverage=1 00:01:53.690 --rc genhtml_branch_coverage=1 00:01:53.690 --rc genhtml_function_coverage=1 00:01:53.690 --rc genhtml_legend=1 00:01:53.690 --rc geninfo_all_blocks=1 00:01:53.690 --no-external' 00:01:53.690 13:30:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:53.690 lcov: LCOV version 1.14 00:01:53.690 13:30:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:08.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:08.554 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:08.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:08.554 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:08.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:08.554 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:08.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:08.554 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:26.630 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:26.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:26.630 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:26.630-00:02:26.632 [the same warning pair -- "<name>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <name>.gcno" -- repeats for each remaining header under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, nvme, nvme_ocssd, nvme_intel, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, opal, nvmf_transport, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, trace_parser, trace, thread, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf]
00:02:26.632 13:31:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:26.632 13:31:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:02:26.632 13:31:28 -- common/autotest_common.sh@10 -- # set +x
00:02:26.632 13:31:28 -- spdk/autotest.sh@91 -- # rm -f
00:02:26.632 13:31:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:27.200 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:02:27.200 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:02:27.200 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:02:27.200 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:02:27.200 0000:00:04.4 (8086 0e24): Already using the
ioatdma driver
00:02:27.200 [the same "Already using the ioatdma driver" message repeats for 0000:00:04.3 (8086 0e23), 0000:00:04.2 (8086 0e22), 0000:00:04.1 (8086 0e21), 0000:00:04.0 (8086 0e20) and 0000:80:04.7 (8086 0e27) through 0000:80:04.0 (8086 0e20)]
00:02:27.458 13:31:30 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:27.458 13:31:30 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:02:27.458 13:31:30 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:02:27.458 13:31:30 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:02:27.458 13:31:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:02:27.458 13:31:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:02:27.458 13:31:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:02:27.458 13:31:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:27.458 13:31:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:02:27.458 13:31:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:27.458 13:31:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:27.458 13:31:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:27.458 13:31:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:27.458 13:31:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:27.458 13:31:30
-- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:27.458 No valid GPT data, bailing 00:02:27.458 13:31:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:27.458 13:31:30 -- scripts/common.sh@391 -- # pt= 00:02:27.458 13:31:30 -- scripts/common.sh@392 -- # return 1 00:02:27.458 13:31:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:27.458 1+0 records in 00:02:27.458 1+0 records out 00:02:27.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218199 s, 481 MB/s 00:02:27.458 13:31:30 -- spdk/autotest.sh@118 -- # sync 00:02:27.458 13:31:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:27.458 13:31:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:27.458 13:31:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:29.358 13:31:31 -- spdk/autotest.sh@124 -- # uname -s 00:02:29.358 13:31:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:29.358 13:31:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.358 13:31:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:29.358 13:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:29.358 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:02:29.359 ************************************ 00:02:29.359 START TEST setup.sh 00:02:29.359 ************************************ 00:02:29.359 13:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.359 * Looking for test storage... 
00:02:29.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:29.359 13:31:32 -- setup/test-setup.sh@10 -- # uname -s 00:02:29.359 13:31:32 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:29.359 13:31:32 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:29.359 13:31:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:29.359 13:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:29.359 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:02:29.359 ************************************ 00:02:29.359 START TEST acl 00:02:29.359 ************************************ 00:02:29.359 13:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:29.616 * Looking for test storage... 00:02:29.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:29.616 13:31:32 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:29.616 13:31:32 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:29.616 13:31:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:29.616 13:31:32 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:29.616 13:31:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:29.616 13:31:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:29.616 13:31:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:29.616 13:31:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.616 13:31:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:29.616 13:31:32 -- setup/acl.sh@12 -- # devs=() 00:02:29.616 13:31:32 -- setup/acl.sh@12 -- # declare -a devs 00:02:29.616 13:31:32 -- setup/acl.sh@13 -- # drivers=() 00:02:29.616 13:31:32 -- setup/acl.sh@13 -- # declare -A drivers 00:02:29.616 13:31:32 -- setup/acl.sh@51 -- # 
setup reset 00:02:29.616 13:31:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:29.616 13:31:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.988 13:31:33 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:30.988 13:31:33 -- setup/acl.sh@16 -- # local dev driver 00:02:30.988 13:31:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.988 13:31:33 -- setup/acl.sh@15 -- # setup output status 00:02:30.988 13:31:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.988 13:31:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:31.922 Hugepages 00:02:31.922 node hugesize free / total 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # continue 00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # continue 00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # continue 00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.922 00:02:31.922 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # continue 00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:31.922 13:31:34 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.922 13:31:34 -- setup/acl.sh@20 -- # continue 00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.922 13:31:34 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:31.922 13:31:34 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:31.922 13:31:34 -- setup/acl.sh@20 -- # continue
00:02:31.922 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:31.922-00:02:31.923 [the same four xtrace lines -- setup/acl.sh@19 BDF match, setup/acl.sh@20 "[[ ioatdma == nvme ]]", continue, and the acl.sh@18 read -- repeat for 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7]
00:02:31.923 13:31:34 -- setup/acl.sh@18 -- # read
-r _ dev _ _ _ driver _ 00:02:31.923 13:31:34 -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:02:31.923 13:31:34 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:31.923 13:31:34 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:02:31.923 13:31:34 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:31.923 13:31:34 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:31.923 13:31:34 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.923 13:31:34 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:31.923 13:31:34 -- setup/acl.sh@54 -- # run_test denied denied 00:02:31.923 13:31:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:31.923 13:31:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:31.923 13:31:34 -- common/autotest_common.sh@10 -- # set +x 00:02:32.191 ************************************ 00:02:32.191 START TEST denied 00:02:32.191 ************************************ 00:02:32.191 13:31:34 -- common/autotest_common.sh@1111 -- # denied 00:02:32.191 13:31:34 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:02:32.191 13:31:34 -- setup/acl.sh@38 -- # setup output config 00:02:32.191 13:31:34 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:02:32.191 13:31:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.191 13:31:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:33.584 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:02:33.584 13:31:36 -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:02:33.584 13:31:36 -- setup/acl.sh@28 -- # local dev driver 00:02:33.584 13:31:36 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:33.585 13:31:36 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:02:33.585 13:31:36 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:02:33.585 13:31:36 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:33.585 13:31:36 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:33.585 13:31:36 -- setup/acl.sh@41 -- # setup reset 00:02:33.585 13:31:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:33.585 13:31:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.119 00:02:36.119 real 0m3.525s 00:02:36.119 user 0m1.080s 00:02:36.119 sys 0m1.660s 00:02:36.119 13:31:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:36.119 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:02:36.119 ************************************ 00:02:36.119 END TEST denied 00:02:36.119 ************************************ 00:02:36.119 13:31:38 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:36.119 13:31:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:36.119 13:31:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:36.119 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:02:36.119 ************************************ 00:02:36.119 START TEST allowed 00:02:36.119 ************************************ 00:02:36.119 13:31:38 -- common/autotest_common.sh@1111 -- # allowed 00:02:36.119 13:31:38 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:02:36.119 13:31:38 -- setup/acl.sh@45 -- # setup output config 00:02:36.119 13:31:38 -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:02:36.119 13:31:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.119 13:31:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:38.026 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:02:38.026 13:31:40 -- setup/acl.sh@47 -- # verify 00:02:38.026 13:31:40 -- setup/acl.sh@28 -- # local dev driver 00:02:38.026 13:31:40 -- setup/acl.sh@48 -- # setup reset 00:02:38.026 13:31:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.026 13:31:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.404 
00:02:39.404 real 0m3.644s 00:02:39.404 user 0m0.966s 00:02:39.404 sys 0m1.621s 00:02:39.404 13:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:39.404 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:02:39.404 ************************************ 00:02:39.404 END TEST allowed 00:02:39.404 ************************************ 00:02:39.404 00:02:39.404 real 0m9.951s 00:02:39.404 user 0m3.167s 00:02:39.404 sys 0m5.002s 00:02:39.404 13:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:39.404 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:02:39.404 ************************************ 00:02:39.404 END TEST acl 00:02:39.404 ************************************ 00:02:39.404 13:31:42 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.404 13:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:39.404 13:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:39.404 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:02:39.663 ************************************ 00:02:39.663 START TEST hugepages 00:02:39.663 ************************************ 00:02:39.663 13:31:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.663 * Looking for test storage... 
00:02:39.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:39.663 13:31:42 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:39.663 13:31:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:39.663 13:31:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:39.663 13:31:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:39.663 13:31:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:39.663 13:31:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:39.663 13:31:42 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:39.663 13:31:42 -- setup/common.sh@18 -- # local node= 00:02:39.663 13:31:42 -- setup/common.sh@19 -- # local var val 00:02:39.663 13:31:42 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.663 13:31:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.663 13:31:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.663 13:31:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.663 13:31:42 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.663 13:31:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 25616404 kB' 'MemAvailable: 29363096 kB' 'Buffers: 2696 kB' 'Cached: 11786516 kB' 'SwapCached: 0 kB' 'Active: 8768740 kB' 'Inactive: 3494528 kB' 'Active(anon): 8198444 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477236 kB' 'Mapped: 177668 kB' 'Shmem: 7724388 kB' 'KReclaimable: 187812 kB' 'Slab: 540460 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352648 kB' 'KernelStack: 12832 kB' 'PageTables: 
9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304784 kB' 'Committed_AS: 9341516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195780 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.663 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.663 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 
13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 
00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # continue 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.664 13:31:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.664 13:31:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.664 13:31:42 -- setup/common.sh@33 -- # echo 2048 00:02:39.664 13:31:42 -- setup/common.sh@33 -- # return 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:39.664 13:31:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:39.664 13:31:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:39.664 13:31:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:39.664 13:31:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:39.664 13:31:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:39.664 13:31:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:39.664 13:31:42 -- setup/hugepages.sh@207 -- # get_nodes 00:02:39.664 13:31:42 -- setup/hugepages.sh@27 
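The long trace above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key (each non-matching key hits `continue`) until Hugepagesize matches and the value 2048 is echoed. A minimal stand-alone sketch of that read loop, with a hypothetical function name and a canned input excerpt standing in for /proc/meminfo:

```shell
# Sketch of the get_meminfo read loop traced above (hypothetical
# stand-alone form; the real helper lives in setup/common.sh).
get_meminfo_value() {
  local get=$1 var val _
  # IFS=': ' mirrors the trace's `IFS=': '` / `read -r var val _` pair.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"   # corresponds to the trace's `echo 2048`
      return 0
    fi
  done
  return 1
}

# Canned excerpt standing in for /proc/meminfo:
printf 'MemTotal: 44026668 kB\nHugepagesize: 2048 kB\n' |
  get_meminfo_value Hugepagesize   # prints 2048
```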
-- # local node 00:02:39.664 13:31:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.664 13:31:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:39.664 13:31:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.664 13:31:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:39.664 13:31:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.664 13:31:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.664 13:31:42 -- setup/hugepages.sh@208 -- # clear_hp 00:02:39.664 13:31:42 -- setup/hugepages.sh@37 -- # local node hp 00:02:39.664 13:31:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:39.664 13:31:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.664 13:31:42 -- setup/hugepages.sh@41 -- # echo 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.664 13:31:42 -- setup/hugepages.sh@41 -- # echo 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:39.664 13:31:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.664 13:31:42 -- setup/hugepages.sh@41 -- # echo 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:39.664 13:31:42 -- setup/hugepages.sh@41 -- # echo 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:39.664 13:31:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:39.664 13:31:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:39.664 13:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:39.664 13:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:39.664 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:02:39.664 
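The clear_hp section of the trace above writes 0 into every per-node, per-size nr_hugepages counter before the test starts. A sketch of that loop, run against a mock sysfs tree so it is safe to execute anywhere (the paths under `$sysfs` are stand-ins for /sys/devices/system/node/node*/hugepages/*):

```shell
# Build a mock of the sysfs layout the trace iterates over.
sysfs=$(mktemp -d)
for n in 0 1; do
  mkdir -p "$sysfs/node$n/hugepages/hugepages-2048kB"
  echo 1024 > "$sysfs/node$n/hugepages/hugepages-2048kB/nr_hugepages"
done

# The trace's `echo 0` per node, per hugepage size (clear_hp pattern).
for node in "$sysfs"/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
  done
done

val0=$(cat "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages")
val1=$(cat "$sysfs/node1/hugepages/hugepages-2048kB/nr_hugepages")
echo "$val0 $val1"   # prints "0 0"
rm -rf "$sysfs"
```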
************************************ 00:02:39.664 START TEST default_setup 00:02:39.664 ************************************ 00:02:39.664 13:31:42 -- common/autotest_common.sh@1111 -- # default_setup 00:02:39.664 13:31:42 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:39.664 13:31:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:39.664 13:31:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:39.664 13:31:42 -- setup/hugepages.sh@51 -- # shift 00:02:39.665 13:31:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:39.665 13:31:42 -- setup/hugepages.sh@52 -- # local node_ids 00:02:39.665 13:31:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.665 13:31:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:39.665 13:31:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:39.665 13:31:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:39.665 13:31:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.665 13:31:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:39.665 13:31:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.665 13:31:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.665 13:31:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.665 13:31:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:39.665 13:31:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:39.665 13:31:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:39.665 13:31:42 -- setup/hugepages.sh@73 -- # return 0 00:02:39.665 13:31:42 -- setup/hugepages.sh@137 -- # setup output 00:02:39.665 13:31:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.665 13:31:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.102 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 
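The get_test_nr_hugepages call in the default_setup trace above turns the requested size of 2097152 kB into `nr_hugepages=1024`; the arithmetic is simply the requested size divided by the 2048 kB default hugepage size (values copied from the log, variable names are illustrative):

```shell
# The division behind `nr_hugepages=1024` in the trace above.
size=2097152              # kB requested by get_test_nr_hugepages
default_hugepages=2048    # kB per huge page, from Hugepagesize
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"      # prints 1024
```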
00:02:41.102 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:41.102 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:41.102 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:42.045 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:02:42.045 13:31:44 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:42.045 13:31:44 -- setup/hugepages.sh@89 -- # local node 00:02:42.045 13:31:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.045 13:31:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.045 13:31:44 -- setup/hugepages.sh@92 -- # local surp 00:02:42.045 13:31:44 -- setup/hugepages.sh@93 -- # local resv 00:02:42.045 13:31:44 -- setup/hugepages.sh@94 -- # local anon 00:02:42.045 13:31:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.045 13:31:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.045 13:31:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.045 13:31:44 -- setup/common.sh@18 -- # local node= 00:02:42.045 13:31:44 -- setup/common.sh@19 -- # local var val 00:02:42.045 13:31:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.045 13:31:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.045 13:31:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.045 13:31:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.045 13:31:44 -- setup/common.sh@28 -- # 
mapfile -t mem 00:02:42.045 13:31:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27694616 kB' 'MemAvailable: 31441308 kB' 'Buffers: 2696 kB' 'Cached: 11786620 kB' 'SwapCached: 0 kB' 'Active: 8789116 kB' 'Inactive: 3494528 kB' 'Active(anon): 8218820 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497660 kB' 'Mapped: 177804 kB' 'Shmem: 7724492 kB' 'KReclaimable: 187812 kB' 'Slab: 539912 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352100 kB' 'KernelStack: 13200 kB' 'PageTables: 10056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9365720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 
13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 
00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.045 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.045 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- 
setup/common.sh@31 -- # read -r var val _ [... repetitive setup/common.sh@32 per-key checks (Zswap ... HardwareCorrupted) elided: none matches AnonHugePages, each hits "continue" ...] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.046 13:31:44 -- setup/common.sh@33 -- # echo 0 00:02:42.046 13:31:44 -- setup/common.sh@33 -- # return 0 00:02:42.046 13:31:44 -- setup/hugepages.sh@97 -- # anon=0 00:02:42.046 13:31:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.046 13:31:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.046 13:31:44 -- setup/common.sh@18 -- # local node= 00:02:42.046 13:31:44 -- setup/common.sh@19 -- # local var val 00:02:42.046 13:31:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.046 13:31:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.046 13:31:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.046 13:31:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.046 13:31:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.046 13:31:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27702156 kB' 'MemAvailable: 31448848 kB'
'Buffers: 2696 kB' 'Cached: 11786620 kB' 'SwapCached: 0 kB' 'Active: 8790412 kB' 'Inactive: 3494528 kB' 'Active(anon): 8220116 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498924 kB' 'Mapped: 178180 kB' 'Shmem: 7724492 kB' 'KReclaimable: 187812 kB' 'Slab: 540040 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352228 kB' 'KernelStack: 12944 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9367656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195892 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.046 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.046 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.046 13:31:44 -- 
setup/common.sh@31 -- # read -r var val _ [... repetitive setup/common.sh@32 per-key checks (Buffers ... HugePages_Rsvd) elided: none matches HugePages_Surp, each hits "continue" ...] 00:02:42.048 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.048 13:31:44 -- setup/common.sh@33 -- # echo 0 00:02:42.048 13:31:44 -- setup/common.sh@33 -- # return 0 00:02:42.048 13:31:44 -- setup/hugepages.sh@99 -- # surp=0 00:02:42.048 13:31:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.048 13:31:44 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:02:42.048 13:31:44 -- setup/common.sh@18 -- # local node= 00:02:42.048 13:31:44 -- setup/common.sh@19 -- # local var val 00:02:42.048 13:31:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.048 13:31:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.048 13:31:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.048 13:31:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.048 13:31:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.048 13:31:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.048 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.048 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.048 13:31:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27700260 kB' 'MemAvailable: 31446952 kB' 'Buffers: 2696 kB' 'Cached: 11786636 kB' 'SwapCached: 0 kB' 'Active: 8792016 kB' 'Inactive: 3494528 kB' 'Active(anon): 8221720 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500500 kB' 'Mapped: 178164 kB' 'Shmem: 7724508 kB' 'KReclaimable: 187812 kB' 'Slab: 540092 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352280 kB' 'KernelStack: 12672 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9370852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195848 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 
15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:42.048 13:31:44 -- [... repetitive setup/common.sh@32 per-key checks (MemTotal ... HugePages_Total) elided: none matches HugePages_Rsvd, each hits "continue" ...] setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- 
# continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.049 13:31:44 -- setup/common.sh@33 -- # echo 0 00:02:42.049 13:31:44 -- setup/common.sh@33 -- # return 0 00:02:42.049 13:31:44 -- setup/hugepages.sh@100 -- # resv=0 00:02:42.049 13:31:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:42.049 nr_hugepages=1024 00:02:42.049 13:31:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.049 resv_hugepages=0 00:02:42.049 13:31:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.049 surplus_hugepages=0 00:02:42.049 13:31:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.049 anon_hugepages=0 00:02:42.049 13:31:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.049 13:31:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:42.049 13:31:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.049 13:31:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.049 13:31:44 -- setup/common.sh@18 -- # local node= 00:02:42.049 13:31:44 -- setup/common.sh@19 -- # local var val 00:02:42.049 13:31:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.049 13:31:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.049 13:31:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.049 13:31:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.049 13:31:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.049 13:31:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27700356 kB' 'MemAvailable: 31447048 kB' 
'Buffers: 2696 kB' 'Cached: 11786648 kB' 'SwapCached: 0 kB' 'Active: 8786796 kB' 'Inactive: 3494528 kB' 'Active(anon): 8216500 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495324 kB' 'Mapped: 178076 kB' 'Shmem: 7724520 kB' 'KReclaimable: 187812 kB' 'Slab: 540156 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352344 kB' 'KernelStack: 12704 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9364752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195876 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.049 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.049 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 
00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.050 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.050 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.051 13:31:44 -- setup/common.sh@33 -- # echo 1024 00:02:42.051 13:31:44 -- setup/common.sh@33 -- # return 0 00:02:42.051 13:31:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.051 13:31:44 -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.051 13:31:44 -- setup/hugepages.sh@27 -- # local node 00:02:42.051 13:31:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.051 13:31:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.051 13:31:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.051 13:31:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.051 13:31:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.051 13:31:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.051 13:31:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.051 13:31:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.051 13:31:44 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:02:42.051 13:31:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.051 13:31:44 -- setup/common.sh@18 -- # local node=0 00:02:42.051 13:31:44 -- setup/common.sh@19 -- # local var val 00:02:42.051 13:31:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.051 13:31:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.051 13:31:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.051 13:31:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.051 13:31:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.051 13:31:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11743356 kB' 'MemUsed: 12876056 kB' 'SwapCached: 0 kB' 'Active: 6632160 kB' 'Inactive: 3250928 kB' 'Active(anon): 6286908 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679352 kB' 'Mapped: 116724 kB' 'AnonPages: 206928 kB' 'Shmem: 6083172 kB' 'KernelStack: 7848 kB' 'PageTables: 5792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277520 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.051 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.051 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- 
setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # continue 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.052 13:31:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.052 13:31:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.052 13:31:44 -- 
setup/common.sh@33 -- # echo 0 00:02:42.052 13:31:44 -- setup/common.sh@33 -- # return 0 00:02:42.052 13:31:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.052 13:31:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.052 13:31:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.052 13:31:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.052 13:31:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:42.052 node0=1024 expecting 1024 00:02:42.052 13:31:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:42.052 00:02:42.052 real 0m2.404s 00:02:42.052 user 0m0.607s 00:02:42.052 sys 0m0.804s 00:02:42.052 13:31:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:42.052 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:02:42.052 ************************************ 00:02:42.052 END TEST default_setup 00:02:42.052 ************************************ 00:02:42.052 13:31:44 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:42.052 13:31:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:42.052 13:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:42.052 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:02:42.312 ************************************ 00:02:42.312 START TEST per_node_1G_alloc 00:02:42.312 ************************************ 00:02:42.312 13:31:44 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:42.312 13:31:44 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:42.312 13:31:44 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:42.312 13:31:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:42.312 13:31:44 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:42.312 13:31:44 -- setup/hugepages.sh@51 -- # shift 00:02:42.312 13:31:44 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:42.312 13:31:44 -- setup/hugepages.sh@52 -- # 
local node_ids 00:02:42.312 13:31:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.312 13:31:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:42.312 13:31:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:42.312 13:31:44 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:42.312 13:31:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.312 13:31:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:42.312 13:31:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.312 13:31:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.312 13:31:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.312 13:31:44 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:42.312 13:31:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.312 13:31:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:42.312 13:31:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.312 13:31:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:42.312 13:31:44 -- setup/hugepages.sh@73 -- # return 0 00:02:42.312 13:31:44 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:42.312 13:31:44 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:42.312 13:31:44 -- setup/hugepages.sh@146 -- # setup output 00:02:42.312 13:31:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.312 13:31:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.253 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.254 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.254 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.254 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.254 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.254 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.254 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:02:43.254 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.254 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.254 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.254 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.254 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.254 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.254 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.254 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.254 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.254 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.254 13:31:46 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:43.254 13:31:46 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:43.254 13:31:46 -- setup/hugepages.sh@89 -- # local node 00:02:43.254 13:31:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.254 13:31:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:43.254 13:31:46 -- setup/hugepages.sh@92 -- # local surp 00:02:43.254 13:31:46 -- setup/hugepages.sh@93 -- # local resv 00:02:43.254 13:31:46 -- setup/hugepages.sh@94 -- # local anon 00:02:43.254 13:31:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:43.254 13:31:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:43.254 13:31:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:43.254 13:31:46 -- setup/common.sh@18 -- # local node= 00:02:43.254 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.254 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.254 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.254 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.254 13:31:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.254 13:31:46 -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:43.254 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27681312 kB' 'MemAvailable: 31428004 kB' 'Buffers: 2696 kB' 'Cached: 11786708 kB' 'SwapCached: 0 kB' 'Active: 8787304 kB' 'Inactive: 3494528 kB' 'Active(anon): 8217008 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495856 kB' 'Mapped: 178240 kB' 'Shmem: 7724580 kB' 'KReclaimable: 187812 kB' 'Slab: 540372 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352560 kB' 'KernelStack: 12752 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9364920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # 
continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.254 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.254 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.255 13:31:46 -- setup/common.sh@33 -- # echo 0 00:02:43.255 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.255 13:31:46 -- setup/hugepages.sh@97 -- # anon=0 00:02:43.255 13:31:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:43.255 13:31:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.255 13:31:46 -- setup/common.sh@18 -- # local node= 00:02:43.255 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.255 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.255 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.255 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.255 13:31:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.255 13:31:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.255 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27683756 kB' 'MemAvailable: 31430448 kB' 
'Buffers: 2696 kB' 'Cached: 11786708 kB' 'SwapCached: 0 kB' 'Active: 8787836 kB' 'Inactive: 3494528 kB' 'Active(anon): 8217540 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496416 kB' 'Mapped: 177760 kB' 'Shmem: 7724580 kB' 'KReclaimable: 187812 kB' 'Slab: 540356 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352544 kB' 'KernelStack: 12752 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9364932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.255 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.255 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.256 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.256 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 
-- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- 
setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.519 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.519 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.520 13:31:46 -- setup/common.sh@33 -- # echo 0 00:02:43.520 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.520 13:31:46 -- setup/hugepages.sh@99 -- # surp=0 00:02:43.520 13:31:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:43.520 13:31:46 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:02:43.520 13:31:46 -- setup/common.sh@18 -- # local node= 00:02:43.520 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.520 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.520 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.520 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.520 13:31:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.520 13:31:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.520 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27685132 kB' 'MemAvailable: 31431824 kB' 'Buffers: 2696 kB' 'Cached: 11786720 kB' 'SwapCached: 0 kB' 'Active: 8786900 kB' 'Inactive: 3494528 kB' 'Active(anon): 8216604 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495400 kB' 'Mapped: 177752 kB' 'Shmem: 7724592 kB' 'KReclaimable: 187812 kB' 'Slab: 540384 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352572 kB' 'KernelStack: 12736 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9364944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 
15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.520 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.520 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- 
setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- 
setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- 
# continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.521 13:31:46 -- setup/common.sh@33 -- # echo 0 00:02:43.521 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.521 13:31:46 -- setup/hugepages.sh@100 -- # resv=0 00:02:43.521 13:31:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.521 nr_hugepages=1024 00:02:43.521 13:31:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.521 resv_hugepages=0 00:02:43.521 13:31:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.521 surplus_hugepages=0 00:02:43.521 13:31:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.521 anon_hugepages=0 00:02:43.521 13:31:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.521 13:31:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.521 13:31:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.521 13:31:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.521 13:31:46 -- setup/common.sh@18 -- # local node= 00:02:43.521 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.521 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.521 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.521 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.521 13:31:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.521 13:31:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.521 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27685344 kB' 'MemAvailable: 31432036 kB' 
'Buffers: 2696 kB' 'Cached: 11786736 kB' 'SwapCached: 0 kB' 'Active: 8786900 kB' 'Inactive: 3494528 kB' 'Active(anon): 8216604 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495404 kB' 'Mapped: 177752 kB' 'Shmem: 7724608 kB' 'KReclaimable: 187812 kB' 'Slab: 540384 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352572 kB' 'KernelStack: 12736 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9364960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.521 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.521 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.522 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.522 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.522 13:31:46 -- setup/common.sh@33 -- # echo 1024 00:02:43.522 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.522 13:31:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.522 13:31:46 -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.522 13:31:46 -- setup/hugepages.sh@27 -- # local node 00:02:43.522 13:31:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.522 13:31:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.522 13:31:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.522 13:31:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.522 13:31:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.522 13:31:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.522 13:31:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.523 13:31:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.523 13:31:46 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:02:43.523 13:31:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.523 13:31:46 -- setup/common.sh@18 -- # local node=0 00:02:43.523 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.523 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.523 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.523 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.523 13:31:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.523 13:31:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.523 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12781572 kB' 'MemUsed: 11837840 kB' 'SwapCached: 0 kB' 'Active: 6632068 kB' 'Inactive: 3250928 kB' 'Active(anon): 6286816 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679428 kB' 'Mapped: 116752 kB' 'AnonPages: 206784 kB' 'Shmem: 6083248 kB' 'KernelStack: 7896 kB' 'PageTables: 5796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277604 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- 
setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.523 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.523 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- 
setup/common.sh@33 -- # echo 0 00:02:43.524 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.524 13:31:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.524 13:31:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.524 13:31:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.524 13:31:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:43.524 13:31:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.524 13:31:46 -- setup/common.sh@18 -- # local node=1 00:02:43.524 13:31:46 -- setup/common.sh@19 -- # local var val 00:02:43.524 13:31:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.524 13:31:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.524 13:31:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.524 13:31:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.524 13:31:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.524 13:31:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407256 kB' 'MemFree: 14904136 kB' 'MemUsed: 4503120 kB' 'SwapCached: 0 kB' 'Active: 2154860 kB' 'Inactive: 243600 kB' 'Active(anon): 1929816 kB' 'Inactive(anon): 0 kB' 'Active(file): 225044 kB' 'Inactive(file): 243600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110020 kB' 'Mapped: 61000 kB' 'AnonPages: 288624 kB' 'Shmem: 1641376 kB' 'KernelStack: 4840 kB' 'PageTables: 3088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82056 kB' 'Slab: 262780 kB' 'SReclaimable: 82056 kB' 'SUnreclaim: 180724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.524 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.524 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # continue 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.525 13:31:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.525 13:31:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.525 13:31:46 -- setup/common.sh@33 -- # echo 0 00:02:43.525 13:31:46 -- setup/common.sh@33 -- # return 0 00:02:43.525 13:31:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.525 13:31:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.525 13:31:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.525 13:31:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:43.525 node0=512 expecting 512 00:02:43.525 13:31:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.525 13:31:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.525 13:31:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.525 13:31:46 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:43.525 node1=512 expecting 512 00:02:43.525 13:31:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:43.525 00:02:43.525 real 0m1.242s 00:02:43.525 user 0m0.515s 00:02:43.525 sys 0m0.696s 00:02:43.525 13:31:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:43.525 13:31:46 -- common/autotest_common.sh@10 -- # set +x 00:02:43.525 ************************************ 00:02:43.525 END TEST per_node_1G_alloc 00:02:43.525 ************************************ 00:02:43.525 13:31:46 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:43.525 13:31:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.525 13:31:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.525 13:31:46 -- 
common/autotest_common.sh@10 -- # set +x 00:02:43.525 ************************************ 00:02:43.525 START TEST even_2G_alloc 00:02:43.525 ************************************ 00:02:43.525 13:31:46 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:43.525 13:31:46 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:43.525 13:31:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.525 13:31:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.525 13:31:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:43.525 13:31:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:43.525 13:31:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.525 13:31:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.525 13:31:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.525 13:31:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.525 13:31:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.525 13:31:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.525 13:31:46 -- setup/hugepages.sh@83 -- # : 512 00:02:43.525 13:31:46 -- setup/hugepages.sh@84 -- # : 1 00:02:43.525 13:31:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.525 13:31:46 -- setup/hugepages.sh@83 -- # : 0 00:02:43.525 13:31:46 -- setup/hugepages.sh@84 -- # : 0 00:02:43.525 13:31:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.525 13:31:46 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:43.525 13:31:46 -- setup/hugepages.sh@153 -- # 
HUGE_EVEN_ALLOC=yes 00:02:43.525 13:31:46 -- setup/hugepages.sh@153 -- # setup output 00:02:43.525 13:31:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.525 13:31:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.904 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.904 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.904 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.904 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.904 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.904 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.904 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.904 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.904 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.904 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.904 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.904 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.904 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.904 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.904 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.904 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.904 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.904 13:31:47 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:44.904 13:31:47 -- setup/hugepages.sh@89 -- # local node 00:02:44.904 13:31:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.904 13:31:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.904 13:31:47 -- setup/hugepages.sh@92 -- # local surp 00:02:44.904 13:31:47 -- setup/hugepages.sh@93 -- # local resv 00:02:44.904 13:31:47 -- setup/hugepages.sh@94 -- # local anon 00:02:44.904 13:31:47 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:44.904 13:31:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:44.904 13:31:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:44.904 13:31:47 -- setup/common.sh@18 -- # local node=
00:02:44.904 13:31:47 -- setup/common.sh@19 -- # local var val
00:02:44.904 13:31:47 -- setup/common.sh@20 -- # local mem_f mem
00:02:44.904 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.904 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.904 13:31:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.904 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.904 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.904 13:31:47 -- setup/common.sh@31 -- # IFS=': '
00:02:44.904 13:31:47 -- setup/common.sh@31 -- # read -r var val _
00:02:44.904 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27697212 kB' 'MemAvailable: 31443904 kB' 'Buffers: 2696 kB' 'Cached: 11786800 kB' 'SwapCached: 0 kB' 'Active: 8785772 kB' 'Inactive: 3494528 kB' 'Active(anon): 8215476 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493924 kB' 'Mapped: 177788 kB' 'Shmem: 7724672 kB' 'KReclaimable: 187812 kB' 'Slab: 540116 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352304 kB' 'KernelStack: 12736 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9360476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196084 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
00:02:44.904 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:44.904 13:31:47 -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == AnonHugePages ]] / continue" trace repeated for every remaining non-matching /proc/meminfo key, MemFree through HardwareCorrupted ...]
00:02:44.905 13:31:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:44.905 13:31:47 -- setup/common.sh@33 -- # echo 0
00:02:44.905 13:31:47 -- setup/common.sh@33 -- # return 0
00:02:44.905 13:31:47 -- setup/hugepages.sh@97 -- # anon=0
00:02:44.905 13:31:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:44.905 13:31:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:44.905 13:31:47 -- setup/common.sh@18 -- # local node=
00:02:44.905 13:31:47 -- setup/common.sh@19 -- # local var val
00:02:44.905 13:31:47 -- setup/common.sh@20 -- # local mem_f mem
00:02:44.905 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.905 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.905 13:31:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.905 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.905 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.905 13:31:47 -- setup/common.sh@31 -- # IFS=': '
00:02:44.905 13:31:47 -- setup/common.sh@31 -- # read -r var val _
00:02:44.905 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27698120 kB' 'MemAvailable: 31444812 kB' 'Buffers: 2696 kB' 'Cached: 11786804 kB' 'SwapCached: 0 kB' 'Active: 8785392 kB' 'Inactive: 3494528 kB' 'Active(anon): 8215096 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493612 kB' 'Mapped: 177852 kB' 'Shmem: 7724676 kB' 'KReclaimable: 187812 kB' 'Slab: 540108 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352296 kB' 'KernelStack: 12672 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9360488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
00:02:44.905 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:44.905 13:31:47 -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue" trace repeated for every remaining non-matching /proc/meminfo key, MemFree through HugePages_Rsvd ...]
00:02:44.906 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:44.906 13:31:47 -- setup/common.sh@33 -- # echo 0
00:02:44.906 13:31:47 -- setup/common.sh@33 -- # return 0
00:02:44.906 13:31:47 -- setup/hugepages.sh@99 -- # surp=0
00:02:44.906 13:31:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:44.906 13:31:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:44.906 13:31:47 -- setup/common.sh@18 -- # local node=
00:02:44.906 13:31:47 -- setup/common.sh@19 -- # local var val
00:02:44.906 13:31:47 -- setup/common.sh@20 -- # local mem_f mem
00:02:44.906 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.906 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.906 13:31:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.906 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.906 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.906 13:31:47 -- setup/common.sh@31 -- # IFS=': '
00:02:44.906 13:31:47 -- setup/common.sh@31 -- # read -r var val _
00:02:44.906 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27698556 kB' 'MemAvailable: 31445248 kB' 'Buffers: 2696 kB' 'Cached: 11786816 kB' 'SwapCached: 0 kB' 'Active: 8785284 kB' 'Inactive: 3494528 kB' 'Active(anon): 8214988 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493472 kB' 'Mapped: 177776 kB' 'Shmem: 7724688 kB' 'KReclaimable: 187812 kB' 'Slab: 540096 kB'
'SReclaimable: 187812 kB' 'SUnreclaim: 352284 kB' 'KernelStack: 12720 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9360504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue" trace repeated for non-matching /proc/meminfo keys, MemFree through WritebackTmp ...]
00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': '
00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907
13:31:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.907 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.907 13:31:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.908 13:31:47 -- setup/common.sh@33 -- # echo 0 00:02:44.908 13:31:47 -- setup/common.sh@33 -- # return 0 00:02:44.908 13:31:47 -- setup/hugepages.sh@100 -- # resv=0 00:02:44.908 13:31:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.908 nr_hugepages=1024 00:02:44.908 13:31:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.908 resv_hugepages=0 00:02:44.908 13:31:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.908 surplus_hugepages=0 00:02:44.908 13:31:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.908 anon_hugepages=0 00:02:44.908 13:31:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.908 13:31:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.908 13:31:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.908 13:31:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.908 13:31:47 -- setup/common.sh@18 -- # local node= 00:02:44.908 13:31:47 -- setup/common.sh@19 -- # local var val 00:02:44.908 13:31:47 -- 
setup/common.sh@20 -- # local mem_f mem 00:02:44.908 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.908 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.908 13:31:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.908 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.908 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27698120 kB' 'MemAvailable: 31444812 kB' 'Buffers: 2696 kB' 'Cached: 11786828 kB' 'SwapCached: 0 kB' 'Active: 8785296 kB' 'Inactive: 3494528 kB' 'Active(anon): 8215000 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493472 kB' 'Mapped: 177776 kB' 'Shmem: 7724700 kB' 'KReclaimable: 187812 kB' 'Slab: 540080 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352268 kB' 'KernelStack: 12720 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9360516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- 
setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 
00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.908 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.908 13:31:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 
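The repeated `IFS=': ' read -r var val _` / `[[ $var == … ]]` / `continue` triples in this trace are `setup/common.sh` scanning `/proc/meminfo` one field at a time until the requested key matches. A minimal self-contained sketch of that lookup pattern (the function name `meminfo_get` is a stand-in for illustration; the traced helper is `get_meminfo` and this is not its verbatim code):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each line on ': ',
# compare the key against the requested field, and print its value.
# meminfo_get is a hypothetical name modeled on the traced get_meminfo.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            # value is in kB for sized fields, a bare count for HugePages_*
            printf '%s\n' "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

meminfo_get HugePages_Total   # prints the configured count (1024 in this trace)
```

The trace looks noisy only because xtrace prints every `read`/`[[`/`continue` of this loop; logically it is a single linear scan.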
00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 
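Between the scans, `setup/hugepages.sh` checks accounting consistency with `(( 1024 == nr_hugepages + surp + resv ))`: the kernel-reported `HugePages_Total` must equal the requested pages plus surplus plus reserved. A tiny sketch of that arithmetic, using the values echoed in this trace (variable names are assumed, not the script's exact ones):

```shell
#!/usr/bin/env bash
# Consistency check sketched from the traced hugepages.sh arithmetic.
nr_hugepages=1024    # from the trace's "echo nr_hugepages=1024"
resv_hugepages=0     # HugePages_Rsvd parsed from /proc/meminfo
surplus_hugepages=0  # HugePages_Surp
if (( 1024 == nr_hugepages + surplus_hugepages + resv_hugepages )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi
```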
00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 
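The same scan is also run per NUMA node in this trace: when a node is given, the file becomes `/sys/devices/system/node/node<N>/meminfo`, whose lines carry a `Node <N> ` prefix that the script strips with `mem=("${mem[@]#Node +([0-9]) }")` after `mapfile` slurps the file. A hedged sketch of that variant (hypothetical name `node_meminfo_get`; like the traced code with an empty `node=`, it falls back to `/proc/meminfo` when the sysfs path is absent):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N "
# Sketch of the per-node meminfo read from the trace. node_meminfo_get is a
# hypothetical name modeled on the traced get_meminfo, not the script verbatim.
node_meminfo_get() {
    local get=$1 node=${2:-} mem line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it first
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { printf '%s\n' "${val:-0}"; return 0; }
    done
    return 1
}

node_meminfo_get HugePages_Free 0   # node 0 if present, else whole-system value
```

This is why the trace later shows `mem_f=/sys/devices/system/node/node0/meminfo` and a second `printf` with `'HugePages_Total: 512'`: each of the two nodes holds half of the 1024-page pool.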
00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.909 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.909 13:31:47 -- setup/common.sh@33 -- # echo 1024 00:02:44.909 13:31:47 -- setup/common.sh@33 -- # return 0 00:02:44.909 13:31:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.909 13:31:47 -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.909 13:31:47 -- setup/hugepages.sh@27 -- # local node 00:02:44.909 13:31:47 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:44.909 13:31:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.909 13:31:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.909 13:31:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.909 13:31:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.909 13:31:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.909 13:31:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.909 13:31:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.909 13:31:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.909 13:31:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.909 13:31:47 -- setup/common.sh@18 -- # local node=0 00:02:44.909 13:31:47 -- setup/common.sh@19 -- # local var val 00:02:44.909 13:31:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.909 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.909 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.909 13:31:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.909 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.909 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.909 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12805384 kB' 'MemUsed: 11814028 kB' 'SwapCached: 0 kB' 'Active: 6630720 kB' 'Inactive: 3250928 kB' 'Active(anon): 6285468 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679520 kB' 'Mapped: 116780 kB' 'AnonPages: 205300 kB' 'Shmem: 6083340 kB' 'KernelStack: 7928 kB' 'PageTables: 5720 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277420 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 [... identical IFS=': ' / read -r / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycles repeat verbatim for each remaining node0 meminfo key (FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted) ...] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Total
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.910 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.910 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.910 13:31:47 -- setup/common.sh@33 -- # echo 0 00:02:44.910 13:31:47 -- setup/common.sh@33 -- # return 0 00:02:44.910 13:31:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.910 13:31:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.910 13:31:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.910 13:31:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:44.910 13:31:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.910 13:31:47 -- setup/common.sh@18 -- # local node=1 00:02:44.910 13:31:47 -- setup/common.sh@19 -- # local var val 00:02:44.910 13:31:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.910 13:31:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.910 13:31:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:44.910 13:31:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:44.910 13:31:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.911 13:31:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.911 13:31:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.911 13:31:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.911 13:31:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407256 kB' 'MemFree: 14892736 kB' 'MemUsed: 4514520 kB' 'SwapCached: 0 kB' 
'Active: 2154596 kB' 'Inactive: 243600 kB' 'Active(anon): 1929552 kB' 'Inactive(anon): 0 kB' 'Active(file): 225044 kB' 'Inactive(file): 243600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110020 kB' 'Mapped: 60996 kB' 'AnonPages: 288176 kB' 'Shmem: 1641376 kB' 'KernelStack: 4792 kB' 'PageTables: 2960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82056 kB' 'Slab: 262660 kB' 'SReclaimable: 82056 kB' 'SUnreclaim: 180604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.911 13:31:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.911 13:31:47 -- setup/common.sh@32 -- # continue 00:02:44.911 13:31:47 [... identical IFS=': ' / read -r / compare / continue cycles repeat verbatim for each remaining node1 meminfo key (MemFree through HugePages_Free) ...] 00:02:44.911 13:31:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.911 13:31:47 -- setup/common.sh@33 -- # echo 0 00:02:44.911 13:31:47 -- setup/common.sh@33 -- # return 0 00:02:44.911 13:31:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.911 13:31:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.911 13:31:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.912 13:31:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.912 13:31:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:44.912 node0=512 expecting 512 00:02:44.912 13:31:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.912 13:31:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.912 13:31:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.912 13:31:47 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:44.912 node1=512 expecting 512 00:02:44.912 13:31:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:44.912 00:02:44.912 real
0m1.361s 00:02:44.912 user 0m0.594s 00:02:44.912 sys 0m0.742s 00:02:44.912 13:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:44.912 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:02:44.912 ************************************ 00:02:44.912 END TEST even_2G_alloc 00:02:44.912 ************************************ 00:02:44.912 13:31:47 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:44.912 13:31:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:44.912 13:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:44.912 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:02:45.170 ************************************ 00:02:45.170 START TEST odd_alloc 00:02:45.170 ************************************ 00:02:45.170 13:31:47 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:45.170 13:31:47 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:45.170 13:31:47 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:45.170 13:31:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:45.170 13:31:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:45.170 13:31:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:45.170 13:31:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.170 13:31:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:45.170 13:31:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.170 13:31:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.170 13:31:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.170 13:31:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 
1]=512 00:02:45.170 13:31:47 -- setup/hugepages.sh@83 -- # : 513 00:02:45.170 13:31:47 -- setup/hugepages.sh@84 -- # : 1 00:02:45.170 13:31:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:45.170 13:31:47 -- setup/hugepages.sh@83 -- # : 0 00:02:45.170 13:31:47 -- setup/hugepages.sh@84 -- # : 0 00:02:45.170 13:31:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.170 13:31:47 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:45.170 13:31:47 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:45.170 13:31:47 -- setup/hugepages.sh@160 -- # setup output 00:02:45.170 13:31:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.170 13:31:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.107 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.107 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.107 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.107 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.107 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.107 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.107 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.107 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.107 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.107 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.107 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.107 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.107 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.107 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.107 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.107 0000:80:04.1 (8086 0e21): Already using 
the vfio-pci driver 00:02:46.107 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.370 13:31:49 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:46.370 13:31:49 -- setup/hugepages.sh@89 -- # local node 00:02:46.370 13:31:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.370 13:31:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.370 13:31:49 -- setup/hugepages.sh@92 -- # local surp 00:02:46.370 13:31:49 -- setup/hugepages.sh@93 -- # local resv 00:02:46.370 13:31:49 -- setup/hugepages.sh@94 -- # local anon 00:02:46.370 13:31:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.370 13:31:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.370 13:31:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.370 13:31:49 -- setup/common.sh@18 -- # local node= 00:02:46.370 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.370 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.370 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.370 13:31:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.370 13:31:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.370 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.370 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.370 13:31:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27707168 kB' 'MemAvailable: 31453860 kB' 'Buffers: 2696 kB' 'Cached: 11786904 kB' 'SwapCached: 0 kB' 'Active: 8778216 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207920 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
486460 kB' 'Mapped: 176772 kB' 'Shmem: 7724776 kB' 'KReclaimable: 187812 kB' 'Slab: 539764 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351952 kB' 'KernelStack: 12640 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 9331728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.370 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.370 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.370 13:31:49 [... identical IFS=': ' / read -r / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycles repeat verbatim for each remaining meminfo key (SwapCached through Percpu) ...] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var
val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.371 13:31:49 -- setup/common.sh@33 -- # echo 0 00:02:46.371 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.371 13:31:49 -- setup/hugepages.sh@97 -- # anon=0 00:02:46.371 13:31:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.371 13:31:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.371 13:31:49 -- setup/common.sh@18 -- # local node= 00:02:46.371 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.371 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.371 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.371 13:31:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.371 13:31:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.371 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.371 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27708328 kB' 'MemAvailable: 31455020 kB' 'Buffers: 2696 kB' 'Cached: 11786908 kB' 'SwapCached: 0 kB' 'Active: 8778416 kB' 'Inactive: 3494528 kB' 'Active(anon): 8208120 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486680 kB' 'Mapped: 176848 kB' 'Shmem: 7724780 kB' 'KReclaimable: 187812 kB' 'Slab: 539764 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351952 kB' 'KernelStack: 12608 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 9331740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # 
continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 
13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.371 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.371 13:31:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 
-- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 
13:31:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.372 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.372 13:31:49 -- setup/common.sh@33 -- # echo 0 00:02:46.372 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.372 13:31:49 -- setup/hugepages.sh@99 -- # surp=0 00:02:46.372 13:31:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.372 13:31:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.372 13:31:49 -- setup/common.sh@18 -- # local node= 00:02:46.372 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.372 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.372 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.372 13:31:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.372 13:31:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.372 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.372 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.372 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27708424 kB' 'MemAvailable: 31455116 kB' 'Buffers: 2696 kB' 'Cached: 11786920 kB' 'SwapCached: 0 kB' 'Active: 8778152 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207856 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486396 kB' 'Mapped: 176824 kB' 'Shmem: 7724792 kB' 'KReclaimable: 187812 kB' 'Slab: 539764 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351952 kB' 'KernelStack: 12608 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 9331756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 
00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 
13:31:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.373 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.373 13:31:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.374 13:31:49 -- setup/common.sh@33 -- # echo 0 00:02:46.374 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.374 13:31:49 -- setup/hugepages.sh@100 -- # resv=0 00:02:46.374 13:31:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:46.374 nr_hugepages=1025 00:02:46.374 13:31:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.374 resv_hugepages=0 00:02:46.374 13:31:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.374 surplus_hugepages=0 00:02:46.374 
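The trace above is `get_meminfo` walking every `Key: value` line of `/proc/meminfo` until it hits the requested key (`HugePages_Rsvd`), then echoing the value and returning. A minimal re-creation of that pattern, written against a sample file rather than the real `/proc/meminfo` (the function name and file path here are illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: split "Key: value" lines on
# IFS=': ' (colon and space both act as separators, so runs of
# padding spaces collapse) and print the value of one key.
get_meminfo() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    echo 0   # key absent: report 0, as the trace's fallback does
}

# Demo input standing in for /proc/meminfo:
sample=$(mktemp)
cat > "$sample" <<'EOF'
HugePages_Total:    1025
HugePages_Free:     1025
HugePages_Rsvd:        0
EOF
get_meminfo HugePages_Total "$sample"
get_meminfo HugePages_Rsvd "$sample"
```

Every non-matching key produces one `continue` iteration, which is why the log shows the full `/proc/meminfo` key list scrolling past before each `echo`/`return 0`.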
13:31:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.374 anon_hugepages=0 00:02:46.374 13:31:49 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:46.374 13:31:49 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:46.374 13:31:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.374 13:31:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.374 13:31:49 -- setup/common.sh@18 -- # local node= 00:02:46.374 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.374 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.374 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.374 13:31:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.374 13:31:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.374 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.374 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27709152 kB' 'MemAvailable: 31455844 kB' 'Buffers: 2696 kB' 'Cached: 11786932 kB' 'SwapCached: 0 kB' 'Active: 8777568 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207272 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485720 kB' 'Mapped: 176744 kB' 'Shmem: 7724804 kB' 'KReclaimable: 187812 kB' 'Slab: 539748 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351936 kB' 'KernelStack: 12560 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352336 kB' 'Committed_AS: 9331772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 
'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # 
continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.374 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.374 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 
00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.375 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.375 13:31:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': 
' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.376 13:31:49 -- setup/common.sh@33 -- # echo 1025 00:02:46.376 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.376 13:31:49 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:46.376 13:31:49 -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.376 13:31:49 -- setup/hugepages.sh@27 -- # local node 00:02:46.376 13:31:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.376 13:31:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.376 13:31:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.376 13:31:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:46.376 13:31:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.376 13:31:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.376 13:31:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.376 13:31:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.376 13:31:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.376 13:31:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.376 13:31:49 -- setup/common.sh@18 -- # local node=0 00:02:46.376 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.376 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.376 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.376 13:31:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.376 13:31:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.376 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.376 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12803328 kB' 'MemUsed: 11816084 kB' 'SwapCached: 0 kB' 'Active: 6626764 kB' 'Inactive: 3250928 kB' 'Active(anon): 6281512 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679612 kB' 'Mapped: 115844 kB' 'AnonPages: 201248 kB' 'Shmem: 6083432 kB' 'KernelStack: 7800 kB' 'PageTables: 5112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277260 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.376 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.376 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- 
# [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@33 -- # echo 0 00:02:46.377 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.377 13:31:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.377 13:31:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.377 13:31:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.377 13:31:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.377 13:31:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.377 13:31:49 -- setup/common.sh@18 -- # local node=1 00:02:46.377 13:31:49 -- setup/common.sh@19 -- # local var val 00:02:46.377 13:31:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.377 13:31:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.377 13:31:49 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.377 13:31:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.377 13:31:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.377 13:31:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407256 kB' 'MemFree: 14907428 kB' 'MemUsed: 4499828 kB' 'SwapCached: 0 kB' 'Active: 2151276 kB' 'Inactive: 243600 kB' 'Active(anon): 1926232 kB' 'Inactive(anon): 0 kB' 'Active(file): 225044 kB' 'Inactive(file): 243600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110032 kB' 'Mapped: 60896 kB' 'AnonPages: 284916 kB' 'Shmem: 1641388 kB' 'KernelStack: 4808 kB' 'PageTables: 2936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82056 kB' 'Slab: 262488 kB' 'SReclaimable: 82056 kB' 'SUnreclaim: 180432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.377 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.377 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- 
# continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 
13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # continue 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.378 13:31:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.378 13:31:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.378 13:31:49 -- setup/common.sh@33 -- # echo 0 00:02:46.378 13:31:49 -- setup/common.sh@33 -- # return 0 00:02:46.378 13:31:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.378 13:31:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.378 13:31:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.378 13:31:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.378 
13:31:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:46.378 node0=512 expecting 513 00:02:46.378 13:31:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.378 13:31:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.378 13:31:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.378 13:31:49 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:46.378 node1=513 expecting 512 00:02:46.378 13:31:49 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:46.378 00:02:46.378 real 0m1.393s 00:02:46.378 user 0m0.552s 00:02:46.378 sys 0m0.813s 00:02:46.378 13:31:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:46.378 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:02:46.378 ************************************ 00:02:46.378 END TEST odd_alloc 00:02:46.378 ************************************ 00:02:46.637 13:31:49 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:46.637 13:31:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.637 13:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.637 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:02:46.637 ************************************ 00:02:46.637 START TEST custom_alloc 00:02:46.637 ************************************ 00:02:46.637 13:31:49 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:46.637 13:31:49 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:46.637 13:31:49 -- setup/hugepages.sh@169 -- # local node 00:02:46.637 13:31:49 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:46.637 13:31:49 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:46.637 13:31:49 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:46.637 13:31:49 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:46.637 13:31:49 -- setup/hugepages.sh@50 
-- # (( 1 > 1 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:46.637 13:31:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.637 13:31:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:46.637 13:31:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.637 13:31:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.637 13:31:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.637 13:31:49 -- setup/hugepages.sh@83 -- # : 256 00:02:46.637 13:31:49 -- setup/hugepages.sh@84 -- # : 1 00:02:46.637 13:31:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:46.637 13:31:49 -- setup/hugepages.sh@83 -- # : 0 00:02:46.637 13:31:49 -- setup/hugepages.sh@84 -- # : 0 00:02:46.637 13:31:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:46.637 13:31:49 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:46.637 13:31:49 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.637 13:31:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.637 13:31:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 
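The trace above shows `get_test_nr_hugepages` turning a requested size in kB into a page count (size=1048576 → nr_hugepages=512, size=2097152 → nr_hugepages=1024) and then splitting the budget across the two NUMA nodes. A minimal standalone sketch of that arithmetic, assuming the 2048 kB default hugepage size reported later in the trace (helper names here are illustrative, not SPDK's exact functions):

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the sizing arithmetic visible in the
# xtrace: pages = size_in_kB / default hugepage size, then an even
# per-node split (512 pages over 2 nodes -> 256 each, as traced).
default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in the log

nr_pages() {             # size_in_kB -> hugepage count
    echo $(( $1 / default_hugepages ))
}

split_even() {           # total_pages node_count -> pages per node
    echo $(( $1 / $2 ))
}
```

With the values from this test run, `nr_pages 1048576` gives 512 and `split_even 512 2` gives 256, matching the `nodes_test[_no_nodes - 1]=256` assignments in the trace.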
00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.637 13:31:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.637 13:31:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.637 13:31:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.637 13:31:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.637 13:31:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.637 13:31:49 -- setup/hugepages.sh@78 -- # return 0 00:02:46.637 13:31:49 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:46.637 13:31:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.637 13:31:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.637 13:31:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:46.637 13:31:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:46.637 13:31:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:46.637 13:31:49 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.637 13:31:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.637 13:31:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.638 13:31:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.638 13:31:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.638 13:31:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.638 13:31:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.638 13:31:49 -- 
setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:46.638 13:31:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.638 13:31:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:46.638 13:31:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:46.638 13:31:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:46.638 13:31:49 -- setup/hugepages.sh@78 -- # return 0 00:02:46.638 13:31:49 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:46.638 13:31:49 -- setup/hugepages.sh@187 -- # setup output 00:02:46.638 13:31:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.638 13:31:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.612 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:47.612 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:47.612 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:47.612 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:47.612 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:47.612 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:47.612 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:47.612 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:47.612 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:47.612 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:47.612 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:47.612 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:47.612 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:47.612 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:47.612 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:47.612 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:47.612 0000:80:04.0 (8086 0e20): Already 
using the vfio-pci driver 00:02:47.880 13:31:50 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:47.880 13:31:50 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:47.880 13:31:50 -- setup/hugepages.sh@89 -- # local node 00:02:47.880 13:31:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.880 13:31:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.880 13:31:50 -- setup/hugepages.sh@92 -- # local surp 00:02:47.880 13:31:50 -- setup/hugepages.sh@93 -- # local resv 00:02:47.880 13:31:50 -- setup/hugepages.sh@94 -- # local anon 00:02:47.880 13:31:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.880 13:31:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.880 13:31:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.881 13:31:50 -- setup/common.sh@18 -- # local node= 00:02:47.881 13:31:50 -- setup/common.sh@19 -- # local var val 00:02:47.881 13:31:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.881 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.881 13:31:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.881 13:31:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.881 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.881 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 26692364 kB' 'MemAvailable: 30439056 kB' 'Buffers: 2696 kB' 'Cached: 11787000 kB' 'SwapCached: 0 kB' 'Active: 8784220 kB' 'Inactive: 3494528 kB' 'Active(anon): 8213924 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 492312 kB' 'Mapped: 177332 kB' 'Shmem: 7724872 kB' 'KReclaimable: 187812 kB' 'Slab: 539672 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351860 kB' 'KernelStack: 12560 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 9338068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195976 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 
00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
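The long `[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue` runs in this trace are `get_meminfo` from setup/common.sh scanning `Key: value` pairs until it hits the requested key. A self-contained sketch of the same technique, assuming stock `/proc/meminfo`-style input (the function name and return convention here are illustrative); per-node files under `/sys/devices/system/node/` prefix each line with `Node <n> `, which an extglob substitution strips first, as in the trace's `mem=("${mem[@]#Node +([0-9]) }")`:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the meminfo scan traced above: slurp the file,
# strip any "Node <n> " prefix, then split each line on ': ' and echo
# the value for the requested key (0 if absent).
shopt -s extglob         # needed for the +([0-9]) pattern below

meminfo_value() {
    local get=$1 mem_f=$2
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for plain /proc/meminfo lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}
```

Given a per-node file containing `Node 1 HugePages_Total: 513`, `meminfo_value HugePages_Total` yields `513`, the figure the odd_alloc trace checks against.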
00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.881 13:31:50 -- 
setup/common.sh@31 -- # read -r var val _
00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:47.881 13:31:50 -- setup/common.sh@32 -- # continue
[per-field scan continues: Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped via continue]
00:02:47.881 13:31:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:47.881 13:31:50 -- setup/common.sh@33 -- # echo 0
00:02:47.881 13:31:50 -- setup/common.sh@33 -- # return 0
00:02:47.881 13:31:50 -- setup/hugepages.sh@97 -- # anon=0
00:02:47.881 13:31:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:47.881 13:31:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:47.881 13:31:50 -- setup/common.sh@18 -- # local node=
00:02:47.881 13:31:50 -- setup/common.sh@19 -- # local var val
00:02:47.881 13:31:50 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.881 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.881 13:31:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.881 13:31:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.881 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.881 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.881 13:31:50 -- setup/common.sh@31 -- # IFS=': '
00:02:47.881 13:31:50 -- setup/common.sh@31 -- # read -r var val _
00:02:47.881 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 26699816 kB' 'MemAvailable: 30446508 kB' 'Buffers: 2696 kB' 'Cached: 11787000 kB' 'SwapCached: 0 kB' 'Active: 8784480 kB' 'Inactive: 3494528 kB' 'Active(anon): 8214184 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492576 kB' 'Mapped: 177736 kB' 'Shmem: 7724872 kB' 'KReclaimable: 187812 kB' 'Slab: 539672 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351860 kB' 'KernelStack: 12576 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 9338080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195960 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
[per-field scan: every field from MemTotal through HugePages_Rsvd compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via continue]
00:02:47.882 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:47.882 13:31:50 -- setup/common.sh@33 -- # echo 0
00:02:47.882 13:31:50 -- setup/common.sh@33 -- # return 0
00:02:47.882 13:31:50 -- setup/hugepages.sh@99 -- # surp=0
00:02:47.882 13:31:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:47.882 13:31:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:47.882 13:31:50 -- setup/common.sh@18 -- # local node=
00:02:47.882 13:31:50 -- setup/common.sh@19 -- # local var val
00:02:47.882 13:31:50 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.882 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.882 13:31:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.882 13:31:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.882 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.882 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.882 13:31:50 -- setup/common.sh@31 -- # IFS=': '
00:02:47.882 13:31:50 -- setup/common.sh@31 -- # read -r var val _
00:02:47.882 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 26695692 kB' 'MemAvailable: 30442384 kB' 'Buffers: 2696 kB' 'Cached: 11787000 kB' 'SwapCached: 0 kB' 'Active: 8780896 kB' 'Inactive: 3494528 kB' 'Active(anon): 8210600 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489008 kB' 'Mapped: 177204 kB' 'Shmem: 7724872 kB' 'KReclaimable: 187812 kB' 'Slab: 539672 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351860 kB' 'KernelStack: 12640 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 9335312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
[per-field scan: every field from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via continue]
00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:47.883 13:31:50 -- setup/common.sh@33 -- # echo 0
00:02:47.883 13:31:50 -- setup/common.sh@33 -- # return 0
00:02:47.883 13:31:50 -- setup/hugepages.sh@100 -- # resv=0
00:02:47.883 13:31:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:02:47.883 13:31:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:47.883 13:31:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:47.883 13:31:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:47.883 13:31:50 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:47.883 13:31:50 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:47.883 13:31:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:47.883 13:31:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:47.883 13:31:50 -- setup/common.sh@18 -- # local node=
00:02:47.883 13:31:50 -- setup/common.sh@19 -- # local var val
00:02:47.883 13:31:50 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.883 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.883 13:31:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.883 13:31:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.883 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.883 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': '
00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _
00:02:47.883 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 26692416 kB' 'MemAvailable: 30439108 kB' 'Buffers: 2696 kB' 'Cached: 11787024 kB' 'SwapCached: 0 kB' 'Active: 8783316 kB' 'Inactive: 3494528 kB' 'Active(anon): 8213020 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491412 kB' 'Mapped: 177204 kB' 'Shmem: 7724896 kB' 'KReclaimable: 187812 kB' 'Slab: 539656 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351844 kB' 'KernelStack: 12640 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829072 kB' 'Committed_AS: 9338108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195944 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB'
[per-field scan: MemTotal through Inactive(file) compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped via continue]
00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue
00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': 
' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.883 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.883 13:31:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.884 13:31:50 -- setup/common.sh@33 -- # echo 1536 00:02:47.884 13:31:50 -- setup/common.sh@33 -- # return 0 00:02:47.884 13:31:50 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:47.884 13:31:50 -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.884 13:31:50 -- setup/hugepages.sh@27 -- # local node 00:02:47.884 13:31:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.884 13:31:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.884 13:31:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.884 13:31:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:47.884 13:31:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.884 13:31:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.884 13:31:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.884 13:31:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.884 13:31:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:47.884 13:31:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.884 13:31:50 -- setup/common.sh@18 -- # local node=0 00:02:47.884 13:31:50 -- setup/common.sh@19 -- # local var val 00:02:47.884 13:31:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.884 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.884 13:31:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.884 13:31:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.884 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.884 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12822088 kB' 'MemUsed: 11797324 kB' 'SwapCached: 0 kB' 'Active: 6627364 kB' 'Inactive: 3250928 kB' 'Active(anon): 6282112 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679648 kB' 'Mapped: 115868 kB' 'AnonPages: 201812 kB' 'Shmem: 6083468 kB' 'KernelStack: 7880 kB' 'PageTables: 5144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277224 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 
00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- 
# [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@33 -- # echo 0 00:02:47.884 13:31:50 -- setup/common.sh@33 -- # return 0 00:02:47.884 13:31:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.884 13:31:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.884 13:31:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.884 13:31:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:47.884 13:31:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.884 13:31:50 -- setup/common.sh@18 -- # local node=1 00:02:47.884 13:31:50 -- setup/common.sh@19 -- # local var val 00:02:47.884 13:31:50 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.884 13:31:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.884 13:31:50 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:47.884 13:31:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:47.884 13:31:50 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.884 13:31:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407256 kB' 'MemFree: 13870104 kB' 'MemUsed: 5537152 kB' 'SwapCached: 0 kB' 'Active: 2150652 kB' 'Inactive: 243600 kB' 'Active(anon): 1925608 kB' 'Inactive(anon): 0 kB' 'Active(file): 225044 kB' 'Inactive(file): 243600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2110076 kB' 'Mapped: 60900 kB' 'AnonPages: 284256 kB' 'Shmem: 1641432 kB' 'KernelStack: 4728 kB' 'PageTables: 2692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82056 kB' 'Slab: 262436 kB' 'SReclaimable: 82056 kB' 'SUnreclaim: 180380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # continue 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.884 13:31:50 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.884 13:31:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.884 13:31:50 -- setup/common.sh@32 
-- # continue 00:02:47.884 [... xtrace condensed: setup/common.sh@31-32 scans each remaining /proc/meminfo field (SwapCached through HugePages_Free) against HugePages_Surp; every non-matching field hits the @32 continue ...] 00:02:47.885 13:31:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.885 13:31:50 -- setup/common.sh@33 -- # echo 0 00:02:47.885 13:31:50 -- setup/common.sh@33 -- # return 0 00:02:47.885 13:31:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.885 13:31:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.885 13:31:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.885 13:31:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.885
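The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time until the requested field matches. As a hedged sketch (a simplified stand-in, not the actual SPDK setup/common.sh; the optional file argument is added here for illustration), the pattern is:

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern seen in the xtrace above
# (simplified; not the actual SPDK setup/common.sh implementation).
# IFS=': ' splits each "Key: value [kB]" line into var/val/unit, and the
# loop continues past every field until the requested key matches.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # numeric value only; the trailing "kB" lands in _
            return 0
        fi
    done < "$file"
    return 1                # field not present
}
```

Against a live /proc/meminfo, `get_meminfo HugePages_Surp` yields the surplus-page count that the trace above resolves to 0.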
13:31:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:47.885 node0=512 expecting 512 00:02:47.885 13:31:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.885 13:31:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.885 13:31:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.885 13:31:50 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:47.885 node1=1024 expecting 1024 00:02:47.885 13:31:50 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:47.885 00:02:47.885 real 0m1.375s 00:02:47.885 user 0m0.594s 00:02:47.885 sys 0m0.754s 00:02:47.885 13:31:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:47.885 13:31:50 -- common/autotest_common.sh@10 -- # set +x 00:02:47.885 ************************************ 00:02:47.885 END TEST custom_alloc 00:02:47.885 ************************************ 00:02:48.143 13:31:50 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:48.143 13:31:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:48.143 13:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:48.143 13:31:50 -- common/autotest_common.sh@10 -- # set +x 00:02:48.143 ************************************ 00:02:48.143 START TEST no_shrink_alloc 00:02:48.143 ************************************ 00:02:48.143 13:31:50 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:02:48.143 13:31:50 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:48.143 13:31:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.143 13:31:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:48.143 13:31:50 -- setup/hugepages.sh@51 -- # shift 00:02:48.143 13:31:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:48.143 13:31:50 -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.143 13:31:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.144 13:31:50 -- 
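The custom_alloc verdict above ("node0=512 expecting 512", "node1=1024 expecting 1024", then the hugepages.sh@130 literal match `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]`) boils down to echoing each node's count next to its expectation and verifying the whole layout with one string comparison. A sketch with this run's values hard-coded (hypothetical condensation, not the actual hugepages.sh):

```shell
#!/usr/bin/env bash
# Sketch of the hugepages.sh@126-130 check from the trace above
# (values hard-coded from this run; not the actual script).
nodes_test=(512 1024)               # node0 and node1 hugepage counts
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
# One literal comparison validates the whole per-node allocation at once.
joined="${nodes_test[0]},${nodes_test[1]}"
[[ $joined == "512,1024" ]] && echo "custom_alloc layout verified"
```

Collapsing the per-node counts into a single comma-joined string keeps the pass/fail decision to one `[[ … ]]` test, which is why the trace shows only one @130 comparison after both node echoes.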
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.144 13:31:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:48.144 13:31:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:48.144 13:31:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.144 13:31:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.144 13:31:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.144 13:31:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.144 13:31:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.144 13:31:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:48.144 13:31:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.144 13:31:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:48.144 13:31:50 -- setup/hugepages.sh@73 -- # return 0 00:02:48.144 13:31:50 -- setup/hugepages.sh@198 -- # setup output 00:02:48.144 13:31:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.144 13:31:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.522 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.522 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.522 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.522 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.522 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.522 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.522 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.522 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.522 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.522 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.522 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.522 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.522 0000:80:04.4 
(8086 0e24): Already using the vfio-pci driver 00:02:49.522 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.522 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.522 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.522 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.522 13:31:52 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:49.522 13:31:52 -- setup/hugepages.sh@89 -- # local node 00:02:49.522 13:31:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.522 13:31:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.522 13:31:52 -- setup/hugepages.sh@92 -- # local surp 00:02:49.522 13:31:52 -- setup/hugepages.sh@93 -- # local resv 00:02:49.522 13:31:52 -- setup/hugepages.sh@94 -- # local anon 00:02:49.522 13:31:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.522 13:31:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.522 13:31:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.522 13:31:52 -- setup/common.sh@18 -- # local node= 00:02:49.522 13:31:52 -- setup/common.sh@19 -- # local var val 00:02:49.522 13:31:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.522 13:31:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.522 13:31:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.522 13:31:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.522 13:31:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.522 13:31:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.522 13:31:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27666768 kB' 'MemAvailable: 31413460 kB' 'Buffers: 2696 kB' 'Cached: 11787096 kB' 'SwapCached: 0 kB' 'Active: 8777608 kB' 'Inactive: 3494528 kB' 'Active(anon): 
8207312 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485640 kB' 'Mapped: 176832 kB' 'Shmem: 7724968 kB' 'KReclaimable: 187812 kB' 'Slab: 539608 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351796 kB' 'KernelStack: 12592 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9331808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.522 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:02:49.522 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.522 [... xtrace condensed: same setup/common.sh@31-32 field-by-field scan of /proc/meminfo against AnonHugePages (Cached through Percpu); every non-matching field hits the @32 continue ...] 00:02:49.523 13:31:52 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.523 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.523 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.523 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.523 13:31:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.523 13:31:52 -- setup/common.sh@33 -- # echo 0 00:02:49.523 13:31:52 -- setup/common.sh@33 -- # return 0 00:02:49.523 13:31:52 -- setup/hugepages.sh@97 -- # anon=0 00:02:49.523 13:31:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.523 13:31:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.523 13:31:52 -- setup/common.sh@18 -- # local node= 00:02:49.523 13:31:52 -- setup/common.sh@19 -- # local var val 00:02:49.523 13:31:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.523 13:31:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.523 13:31:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.523 13:31:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.523 13:31:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.523 13:31:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.523 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.523 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.524 13:31:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27666632 kB' 'MemAvailable: 31413324 kB' 'Buffers: 2696 kB' 'Cached: 11787100 kB' 'SwapCached: 0 kB' 'Active: 8778192 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207896 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486236 kB' 'Mapped: 176884 kB' 'Shmem: 7724972 kB' 'KReclaimable: 187812 kB' 'Slab: 539604 kB' 'SReclaimable: 187812 
kB' 'SUnreclaim: 351792 kB' 'KernelStack: 12608 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9331820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195924 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:49.524 [... xtrace condensed: setup/common.sh@31-32 field-by-field scan of /proc/meminfo against HugePages_Surp (MemTotal through SReclaimable); every non-matching field hits the @32 continue ...] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.525 13:31:52 -- setup/common.sh@33 -- # echo 0 00:02:49.525 13:31:52 -- setup/common.sh@33 -- # return 0 00:02:49.525 13:31:52 -- setup/hugepages.sh@99 -- # surp=0 00:02:49.525 13:31:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:49.525 13:31:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:49.525 13:31:52 -- setup/common.sh@18 -- # local node= 00:02:49.525 13:31:52 -- setup/common.sh@19 -- # local var val 00:02:49.525 13:31:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.525 13:31:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.525 13:31:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.525 13:31:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.525 13:31:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.525 
13:31:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27667484 kB' 'MemAvailable: 31414176 kB' 'Buffers: 2696 kB' 'Cached: 11787112 kB' 'SwapCached: 0 kB' 'Active: 8777052 kB' 'Inactive: 3494528 kB' 'Active(anon): 8206756 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484956 kB' 'Mapped: 176792 kB' 'Shmem: 7724984 kB' 'KReclaimable: 187812 kB' 'Slab: 539564 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 351752 kB' 'KernelStack: 12576 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9331836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 
00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.525 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.525 13:31:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 
13:31:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 
-- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.526 13:31:52 -- setup/common.sh@33 -- # echo 0 00:02:49.526 13:31:52 -- setup/common.sh@33 -- # return 0 00:02:49.526 13:31:52 -- setup/hugepages.sh@100 -- # resv=0 00:02:49.526 13:31:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:49.526 
nr_hugepages=1024 00:02:49.526 13:31:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:49.526 resv_hugepages=0 00:02:49.526 13:31:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:49.526 surplus_hugepages=0 00:02:49.526 13:31:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:49.526 anon_hugepages=0 00:02:49.526 13:31:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.526 13:31:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:49.526 13:31:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:49.526 13:31:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.526 13:31:52 -- setup/common.sh@18 -- # local node= 00:02:49.526 13:31:52 -- setup/common.sh@19 -- # local var val 00:02:49.526 13:31:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.526 13:31:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.526 13:31:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.526 13:31:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.526 13:31:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.526 13:31:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27667484 kB' 'MemAvailable: 31414176 kB' 'Buffers: 2696 kB' 'Cached: 11787124 kB' 'SwapCached: 0 kB' 'Active: 8777296 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207000 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485164 kB' 'Mapped: 176792 kB' 'Shmem: 7724996 kB' 'KReclaimable: 187812 kB' 'Slab: 539564 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 
351752 kB' 'KernelStack: 12576 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9331856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195956 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.526 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.526 13:31:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 
00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.527 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.527 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.528 13:31:52 -- setup/common.sh@33 -- # echo 1024 00:02:49.528 13:31:52 -- setup/common.sh@33 -- # return 0 00:02:49.528 13:31:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.528 13:31:52 -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.528 13:31:52 -- setup/hugepages.sh@27 -- # local node 00:02:49.528 13:31:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.528 13:31:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:49.528 13:31:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.528 13:31:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:49.528 13:31:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.528 13:31:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.528 13:31:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.528 13:31:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.528 13:31:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.528 13:31:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.528 13:31:52 -- setup/common.sh@18 -- # local node=0 00:02:49.528 13:31:52 -- setup/common.sh@19 -- # local var val 00:02:49.528 13:31:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.528 13:31:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.528 13:31:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.528 13:31:52 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.528 13:31:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.528 13:31:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11770792 kB' 'MemUsed: 12848620 kB' 'SwapCached: 0 kB' 'Active: 6627072 kB' 'Inactive: 3250928 kB' 'Active(anon): 6281820 kB' 'Inactive(anon): 0 kB' 'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679756 kB' 'Mapped: 115896 kB' 'AnonPages: 201428 kB' 'Shmem: 6083576 kB' 'KernelStack: 7848 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277164 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 13:31:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # continue 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 13:31:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 13:31:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.529 13:31:52 -- setup/common.sh@33 -- # echo 0 00:02:49.529 13:31:52 -- setup/common.sh@33 -- # return 0 00:02:49.529 13:31:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.529 13:31:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.529 13:31:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.529 13:31:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.529 13:31:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:49.529 
node0=1024 expecting 1024 00:02:49.529 13:31:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:49.529 13:31:52 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:49.529 13:31:52 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:49.529 13:31:52 -- setup/hugepages.sh@202 -- # setup output 00:02:49.529 13:31:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.529 13:31:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.909 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.909 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.909 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.909 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.909 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.909 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.909 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.909 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.909 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.909 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.909 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.909 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.909 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.909 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.909 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.909 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.909 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.909 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:50.909 13:31:53 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:50.909 13:31:53 -- setup/hugepages.sh@89 -- # local node 00:02:50.909 13:31:53 -- setup/hugepages.sh@90 -- # local 
sorted_t 00:02:50.909 13:31:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.909 13:31:53 -- setup/hugepages.sh@92 -- # local surp 00:02:50.909 13:31:53 -- setup/hugepages.sh@93 -- # local resv 00:02:50.909 13:31:53 -- setup/hugepages.sh@94 -- # local anon 00:02:50.909 13:31:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.909 13:31:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.909 13:31:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.909 13:31:53 -- setup/common.sh@18 -- # local node= 00:02:50.909 13:31:53 -- setup/common.sh@19 -- # local var val 00:02:50.909 13:31:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.909 13:31:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.909 13:31:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.909 13:31:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.909 13:31:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.909 13:31:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.909 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.909 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27653080 kB' 'MemAvailable: 31399772 kB' 'Buffers: 2696 kB' 'Cached: 11787172 kB' 'SwapCached: 0 kB' 'Active: 8778212 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207916 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486156 kB' 'Mapped: 176944 kB' 'Shmem: 7725044 kB' 'KReclaimable: 187812 kB' 'Slab: 539904 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352092 kB' 'KernelStack: 12608 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
29353360 kB' 'Committed_AS: 9332380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 
00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.910 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.910 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.910 13:31:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.911 13:31:53 -- setup/common.sh@33 -- # echo 0 00:02:50.911 13:31:53 -- setup/common.sh@33 -- # return 0 00:02:50.911 13:31:53 -- setup/hugepages.sh@97 -- # anon=0 
00:02:50.911 13:31:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.911 13:31:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.911 13:31:53 -- setup/common.sh@18 -- # local node= 00:02:50.911 13:31:53 -- setup/common.sh@19 -- # local var val 00:02:50.911 13:31:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.911 13:31:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.911 13:31:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.911 13:31:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.911 13:31:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.911 13:31:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27653048 kB' 'MemAvailable: 31399740 kB' 'Buffers: 2696 kB' 'Cached: 11787172 kB' 'SwapCached: 0 kB' 'Active: 8778724 kB' 'Inactive: 3494528 kB' 'Active(anon): 8208428 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486612 kB' 'Mapped: 176944 kB' 'Shmem: 7725044 kB' 'KReclaimable: 187812 kB' 'Slab: 539884 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352072 kB' 'KernelStack: 12640 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9332392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 
00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 
13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.911 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.911 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 
13:31:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.912 13:31:53 -- setup/common.sh@33 -- # echo 0 00:02:50.912 13:31:53 -- setup/common.sh@33 -- # return 0 00:02:50.912 13:31:53 -- setup/hugepages.sh@99 -- # surp=0 00:02:50.912 13:31:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.912 13:31:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.912 13:31:53 -- setup/common.sh@18 -- # local node= 00:02:50.912 13:31:53 -- setup/common.sh@19 -- # local var val 00:02:50.912 13:31:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.912 13:31:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.912 13:31:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.912 13:31:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.912 13:31:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.912 13:31:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27653256 kB' 'MemAvailable: 31399948 kB' 'Buffers: 2696 kB' 'Cached: 11787176 kB' 'SwapCached: 0 kB' 'Active: 8778428 kB' 'Inactive: 3494528 kB' 'Active(anon): 8208132 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486328 kB' 'Mapped: 176880 kB' 'Shmem: 7725048 kB' 'KReclaimable: 187812 kB' 'Slab: 539916 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352104 kB' 'KernelStack: 12672 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9332408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.912 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.912 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.913 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.913 13:31:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.914 13:31:53 -- setup/common.sh@33 -- # echo 0 00:02:50.914 13:31:53 -- setup/common.sh@33 -- # return 0 00:02:50.914 13:31:53 -- setup/hugepages.sh@100 -- # resv=0 00:02:50.914 13:31:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:50.914 nr_hugepages=1024 00:02:50.914 13:31:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:50.914 resv_hugepages=0 00:02:50.914 13:31:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:50.914 surplus_hugepages=0 00:02:50.914 13:31:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:50.914 anon_hugepages=0 00:02:50.914 13:31:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.914 13:31:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:50.914 13:31:53 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:02:50.914 13:31:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:50.914 13:31:53 -- setup/common.sh@18 -- # local node= 00:02:50.914 13:31:53 -- setup/common.sh@19 -- # local var val 00:02:50.914 13:31:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.914 13:31:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.914 13:31:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.914 13:31:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.914 13:31:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.914 13:31:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026668 kB' 'MemFree: 27653504 kB' 'MemAvailable: 31400196 kB' 'Buffers: 2696 kB' 'Cached: 11787200 kB' 'SwapCached: 0 kB' 'Active: 8777580 kB' 'Inactive: 3494528 kB' 'Active(anon): 8207284 kB' 'Inactive(anon): 0 kB' 'Active(file): 570296 kB' 'Inactive(file): 3494528 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485408 kB' 'Mapped: 176800 kB' 'Shmem: 7725072 kB' 'KReclaimable: 187812 kB' 'Slab: 539876 kB' 'SReclaimable: 187812 kB' 'SUnreclaim: 352064 kB' 'KernelStack: 12624 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353360 kB' 'Committed_AS: 9332424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 33600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1754716 kB' 'DirectMap2M: 15990784 kB' 'DirectMap1G: 34603008 kB' 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.914 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.914 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.915 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.915 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.915 13:31:53 -- setup/common.sh@33 -- # echo 1024 00:02:50.915 13:31:53 -- setup/common.sh@33 -- # return 0 00:02:50.915 13:31:53 -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.915 13:31:53 -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.915 13:31:53 -- setup/hugepages.sh@27 -- # local node 00:02:50.915 13:31:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.915 13:31:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:50.915 13:31:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.916 13:31:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:50.916 13:31:53 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.916 13:31:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.916 13:31:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.916 13:31:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.916 13:31:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.916 13:31:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.916 13:31:53 -- setup/common.sh@18 -- # local node=0 00:02:50.916 13:31:53 -- setup/common.sh@19 -- # local var val 00:02:50.916 13:31:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.916 13:31:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.916 13:31:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.916 13:31:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.916 13:31:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.916 13:31:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 11759320 kB' 'MemUsed: 12860092 kB' 'SwapCached: 0 kB' 'Active: 6626236 kB' 'Inactive: 3250928 kB' 'Active(anon): 6280984 kB' 'Inactive(anon): 0 kB' 
'Active(file): 345252 kB' 'Inactive(file): 3250928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9679820 kB' 'Mapped: 115904 kB' 'AnonPages: 200452 kB' 'Shmem: 6083640 kB' 'KernelStack: 7912 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105756 kB' 'Slab: 277304 kB' 'SReclaimable: 105756 kB' 'SUnreclaim: 171548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 
-- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.916 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.916 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # continue 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.917 13:31:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.917 13:31:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.917 13:31:53 -- setup/common.sh@33 -- # echo 0 00:02:50.917 13:31:53 -- setup/common.sh@33 -- # return 0 00:02:50.917 13:31:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.917 13:31:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.917 13:31:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.917 13:31:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.917 13:31:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:50.917 node0=1024 expecting 1024 00:02:50.917 13:31:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:50.917 00:02:50.917 real 0m2.764s 00:02:50.917 user 0m1.189s 00:02:50.917 sys 0m1.523s 00:02:50.917 13:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.917 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:02:50.917 ************************************ 00:02:50.917 END TEST no_shrink_alloc 00:02:50.917 ************************************ 00:02:50.917 13:31:53 -- setup/hugepages.sh@217 -- # clear_hp 00:02:50.917 13:31:53 -- 
setup/hugepages.sh@37 -- # local node hp 00:02:50.917 13:31:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.917 13:31:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.917 13:31:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.917 13:31:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.917 13:31:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.917 13:31:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.917 13:31:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.917 13:31:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.917 13:31:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.917 13:31:53 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.917 13:31:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:50.917 13:31:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:50.917 00:02:50.917 real 0m11.350s 00:02:50.917 user 0m4.371s 00:02:50.917 sys 0m5.779s 00:02:50.917 13:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.917 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:02:50.917 ************************************ 00:02:50.917 END TEST hugepages 00:02:50.917 ************************************ 00:02:50.917 13:31:53 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:50.917 13:31:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.917 13:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.917 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:02:50.917 ************************************ 00:02:50.917 START TEST driver 00:02:50.917 ************************************ 00:02:50.917 13:31:53 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:51.176 * Looking for test storage... 00:02:51.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.176 13:31:53 -- setup/driver.sh@68 -- # setup reset 00:02:51.176 13:31:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.176 13:31:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.718 13:31:56 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:53.718 13:31:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.718 13:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.718 13:31:56 -- common/autotest_common.sh@10 -- # set +x 00:02:53.718 ************************************ 00:02:53.718 START TEST guess_driver 00:02:53.718 ************************************ 00:02:53.718 13:31:56 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:53.718 13:31:56 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:53.718 13:31:56 -- setup/driver.sh@47 -- # local fail=0 00:02:53.718 13:31:56 -- setup/driver.sh@49 -- # pick_driver 00:02:53.718 13:31:56 -- setup/driver.sh@36 -- # vfio 00:02:53.718 13:31:56 -- setup/driver.sh@21 -- # local iommu_grups 00:02:53.718 13:31:56 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:53.718 13:31:56 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:53.718 13:31:56 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:53.718 13:31:56 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:53.718 13:31:56 -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:02:53.718 13:31:56 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:53.719 13:31:56 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:53.719 13:31:56 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:53.719 13:31:56 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:53.719 13:31:56 
-- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:53.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:53.719 13:31:56 -- setup/driver.sh@30 -- # return 0 00:02:53.719 13:31:56 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:53.719 13:31:56 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:53.719 13:31:56 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:53.719 13:31:56 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:53.719 Looking for driver=vfio-pci 00:02:53.719 13:31:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:53.719 13:31:56 -- setup/driver.sh@45 -- # setup output config 00:02:53.719 13:31:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.719 13:31:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.653 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.653 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.653 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.654 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.654 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.654 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.654 13:31:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.654 13:31:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.654 13:31:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.588 13:31:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:55.588 13:31:58 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:55.588 13:31:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:55.846 13:31:58 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:55.846 13:31:58 -- setup/driver.sh@65 -- # setup reset 00:02:55.846 13:31:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.846 13:31:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.374 00:02:58.374 real 0m4.706s 00:02:58.374 user 0m1.089s 00:02:58.374 sys 0m1.768s 00:02:58.374 13:32:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.374 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:02:58.374 
************************************ 00:02:58.374 END TEST guess_driver 00:02:58.374 ************************************ 00:02:58.374 00:02:58.374 real 0m7.136s 00:02:58.374 user 0m1.645s 00:02:58.374 sys 0m2.780s 00:02:58.374 13:32:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.374 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:02:58.374 ************************************ 00:02:58.374 END TEST driver 00:02:58.374 ************************************ 00:02:58.374 13:32:00 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:58.374 13:32:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.374 13:32:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.374 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:02:58.374 ************************************ 00:02:58.374 START TEST devices 00:02:58.374 ************************************ 00:02:58.374 13:32:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:58.374 * Looking for test storage... 
00:02:58.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.374 13:32:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:58.374 13:32:01 -- setup/devices.sh@192 -- # setup reset 00:02:58.374 13:32:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.374 13:32:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.750 13:32:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:02:59.750 13:32:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:59.750 13:32:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:59.750 13:32:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:59.750 13:32:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:59.750 13:32:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:59.750 13:32:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:59.750 13:32:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.750 13:32:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:59.750 13:32:02 -- setup/devices.sh@196 -- # blocks=() 00:02:59.750 13:32:02 -- setup/devices.sh@196 -- # declare -a blocks 00:02:59.750 13:32:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:59.750 13:32:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:59.750 13:32:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:59.750 13:32:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:59.750 13:32:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:59.750 13:32:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:59.750 13:32:02 -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:02:59.750 13:32:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:02:59.750 13:32:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:59.750 13:32:02 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:02:59.750 13:32:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:59.750 No valid GPT data, bailing 00:02:59.750 13:32:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.750 13:32:02 -- scripts/common.sh@391 -- # pt= 00:02:59.750 13:32:02 -- scripts/common.sh@392 -- # return 1 00:02:59.750 13:32:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:59.750 13:32:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:59.750 13:32:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:59.750 13:32:02 -- setup/common.sh@80 -- # echo 1000204886016 00:02:59.750 13:32:02 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:59.750 13:32:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:59.750 13:32:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:02:59.750 13:32:02 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:59.750 13:32:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:59.750 13:32:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:59.750 13:32:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.750 13:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.750 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:03:00.008 ************************************ 00:03:00.008 START TEST nvme_mount 00:03:00.008 ************************************ 00:03:00.008 13:32:02 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:00.008 13:32:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:00.008 13:32:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:00.008 13:32:02 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.008 13:32:02 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:00.008 13:32:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:00.008 13:32:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:00.008 13:32:02 -- setup/common.sh@40 -- # local part_no=1 00:03:00.009 13:32:02 -- setup/common.sh@41 -- # local size=1073741824 00:03:00.009 13:32:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:00.009 13:32:02 -- setup/common.sh@44 -- # parts=() 00:03:00.009 13:32:02 -- setup/common.sh@44 -- # local parts 00:03:00.009 13:32:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:00.009 13:32:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.009 13:32:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.009 13:32:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:00.009 13:32:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.009 13:32:02 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:00.009 13:32:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:00.009 13:32:02 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:00.945 Creating new GPT entries in memory. 00:03:00.945 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:00.945 other utilities. 00:03:00.945 13:32:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:00.945 13:32:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:00.945 13:32:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:00.945 13:32:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:00.945 13:32:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:01.878 Creating new GPT entries in memory. 00:03:01.878 The operation has completed successfully. 
00:03:01.878 13:32:04 -- setup/common.sh@57 -- # (( part++ )) 00:03:01.878 13:32:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:01.878 13:32:04 -- setup/common.sh@62 -- # wait 2472829 00:03:01.878 13:32:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.878 13:32:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:01.878 13:32:04 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.878 13:32:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:01.878 13:32:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:01.878 13:32:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.878 13:32:04 -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:01.878 13:32:04 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:01.878 13:32:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:01.878 13:32:04 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.878 13:32:04 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:01.878 13:32:04 -- setup/devices.sh@53 -- # local found=0 00:03:01.878 13:32:04 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:01.878 13:32:04 -- setup/devices.sh@56 -- # : 00:03:01.878 13:32:04 -- setup/devices.sh@59 -- # local pci status 00:03:01.878 13:32:04 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:01.878 13:32:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:01.878 13:32:04 -- setup/devices.sh@47 -- # setup output config 00:03:01.878 13:32:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.878 13:32:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:03.254 13:32:05 -- setup/devices.sh@63 -- # found=1 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.254 13:32:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:03.254 13:32:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:03.254 13:32:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.254 13:32:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:03.254 
13:32:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.254 13:32:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:03.254 13:32:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.254 13:32:05 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.254 13:32:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:03.254 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:03.254 13:32:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:03.254 13:32:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:03.513 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:03.513 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:03.513 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:03.513 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:03.513 13:32:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:03.513 13:32:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:03.513 13:32:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.513 13:32:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:03.513 13:32:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:03.513 13:32:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.513 13:32:06 -- setup/devices.sh@116 -- # verify 0000:82:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.513 13:32:06 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:03.513 13:32:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:03.513 13:32:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.513 13:32:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.513 13:32:06 -- setup/devices.sh@53 -- # local found=0 00:03:03.513 13:32:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:03.513 13:32:06 -- setup/devices.sh@56 -- # : 00:03:03.513 13:32:06 -- setup/devices.sh@59 -- # local pci status 00:03:03.513 13:32:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.513 13:32:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:03.513 13:32:06 -- setup/devices.sh@47 -- # setup output config 00:03:03.513 13:32:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.513 13:32:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:04.482 13:32:07 -- setup/devices.sh@63 -- # found=1 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == 
\0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.482 13:32:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:04.482 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.741 13:32:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.741 13:32:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:04.741 13:32:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.741 13:32:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:04.741 13:32:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.741 13:32:07 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.741 13:32:07 -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:03:04.741 13:32:07 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:04.741 13:32:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:04.741 13:32:07 -- setup/devices.sh@50 -- # local mount_point= 00:03:04.741 13:32:07 -- setup/devices.sh@51 -- # local test_file= 00:03:04.741 13:32:07 -- setup/devices.sh@53 -- # local found=0 00:03:04.741 13:32:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:04.741 13:32:07 -- setup/devices.sh@59 -- # local pci status 00:03:04.741 13:32:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.741 13:32:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:04.741 13:32:07 -- setup/devices.sh@47 -- # setup 
output config 00:03:04.741 13:32:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.741 13:32:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:06.116 13:32:08 -- setup/devices.sh@63 -- # found=1 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.116 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.116 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.117 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.117 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.117 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.117 13:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.117 13:32:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:06.117 13:32:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:06.117 13:32:08 -- setup/devices.sh@68 -- # return 0 00:03:06.117 13:32:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:06.117 13:32:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.117 13:32:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.117 13:32:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:06.117 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:03:06.117 00:03:06.117 real 0m6.071s 00:03:06.117 user 0m1.402s 00:03:06.117 sys 0m2.288s 00:03:06.117 13:32:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:06.117 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.117 ************************************ 00:03:06.117 END TEST nvme_mount 00:03:06.117 ************************************ 00:03:06.117 13:32:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:06.117 13:32:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.117 13:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.117 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:06.117 ************************************ 00:03:06.117 START TEST dm_mount 00:03:06.117 ************************************ 00:03:06.117 13:32:08 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:06.117 13:32:08 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:06.117 13:32:08 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:06.117 13:32:08 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:06.117 13:32:08 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:06.117 13:32:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:06.117 13:32:08 -- setup/common.sh@40 -- # local part_no=2 00:03:06.117 13:32:08 -- setup/common.sh@41 -- # local size=1073741824 00:03:06.117 13:32:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:06.117 13:32:08 -- setup/common.sh@44 -- # parts=() 00:03:06.117 13:32:08 -- setup/common.sh@44 -- # local parts 00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:06.117 13:32:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part++ )) 00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:06.117 13:32:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part++ )) 00:03:06.117 13:32:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:06.117 13:32:08 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:06.117 13:32:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:06.117 13:32:08 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:07.052 Creating new GPT entries in memory. 00:03:07.052 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:07.052 other utilities. 00:03:07.052 13:32:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:07.052 13:32:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:07.052 13:32:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:07.052 13:32:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:07.052 13:32:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:07.985 Creating new GPT entries in memory. 00:03:07.985 The operation has completed successfully. 00:03:07.985 13:32:10 -- setup/common.sh@57 -- # (( part++ )) 00:03:07.985 13:32:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:07.985 13:32:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:07.985 13:32:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:07.985 13:32:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:09.365 The operation has completed successfully. 
00:03:09.365 13:32:11 -- setup/common.sh@57 -- # (( part++ )) 00:03:09.365 13:32:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.365 13:32:11 -- setup/common.sh@62 -- # wait 2475238 00:03:09.365 13:32:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:09.365 13:32:11 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:09.365 13:32:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:09.365 13:32:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:09.365 13:32:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:09.365 13:32:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:09.365 13:32:11 -- setup/devices.sh@161 -- # break 00:03:09.365 13:32:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:09.365 13:32:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:09.365 13:32:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:09.365 13:32:11 -- setup/devices.sh@166 -- # dm=dm-0 00:03:09.365 13:32:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:09.365 13:32:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:09.365 13:32:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:09.365 13:32:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:09.365 13:32:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:09.365 13:32:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:09.365 13:32:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:09.365 13:32:11 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:09.365 13:32:11 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:09.365 13:32:11 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:09.365 13:32:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:09.365 13:32:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:09.365 13:32:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:09.365 13:32:11 -- setup/devices.sh@53 -- # local found=0 00:03:09.365 13:32:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:09.365 13:32:11 -- setup/devices.sh@56 -- # : 00:03:09.365 13:32:11 -- setup/devices.sh@59 -- # local pci status 00:03:09.365 13:32:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.365 13:32:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:09.365 13:32:11 -- setup/devices.sh@47 -- # setup output config 00:03:09.365 13:32:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.365 13:32:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:10.299 13:32:12 -- setup/devices.sh@63 -- # found=1 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 
13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.299 13:32:12 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:10.299 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.300 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.300 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.300 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.300 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.300 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.300 13:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:10.300 13:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.557 13:32:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:10.557 13:32:13 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:10.557 13:32:13 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.557 13:32:13 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:10.557 13:32:13 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:10.557 13:32:13 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:10.557 13:32:13 -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:10.557 13:32:13 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:10.557 13:32:13 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:10.557 13:32:13 -- setup/devices.sh@50 -- # local mount_point= 00:03:10.557 13:32:13 -- setup/devices.sh@51 -- # local test_file= 00:03:10.557 13:32:13 -- setup/devices.sh@53 -- # local found=0 00:03:10.557 13:32:13 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:03:10.557 13:32:13 -- setup/devices.sh@59 -- # local pci status 00:03:10.557 13:32:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.557 13:32:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:10.557 13:32:13 -- setup/devices.sh@47 -- # setup output config 00:03:10.557 13:32:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.557 13:32:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:11.491 13:32:14 -- setup/devices.sh@63 -- # found=1 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:11.491 13:32:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.491 13:32:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.491 13:32:14 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:11.491 13:32:14 -- setup/devices.sh@68 -- # return 0 00:03:11.491 13:32:14 -- setup/devices.sh@187 -- # cleanup_dm 00:03:11.491 13:32:14 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.491 13:32:14 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:11.491 13:32:14 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:11.751 13:32:14 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.751 13:32:14 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:11.751 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:11.751 13:32:14 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:11.751 13:32:14 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:11.751 00:03:11.751 real 0m5.585s 00:03:11.751 user 0m0.968s 00:03:11.751 sys 0m1.527s 00:03:11.751 13:32:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:11.751 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:11.751 ************************************ 00:03:11.751 END TEST dm_mount 00:03:11.751 ************************************ 00:03:11.751 13:32:14 -- setup/devices.sh@1 -- # cleanup 00:03:11.751 13:32:14 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:11.751 13:32:14 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.751 13:32:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.751 13:32:14 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:11.751 13:32:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:11.751 13:32:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:12.010 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:12.010 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:12.010 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:12.010 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:12.010 13:32:14 -- setup/devices.sh@12 -- # cleanup_dm 00:03:12.010 
13:32:14 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:12.010 13:32:14 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:12.010 13:32:14 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:12.010 13:32:14 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:12.010 13:32:14 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:12.010 13:32:14 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:12.010 00:03:12.010 real 0m13.681s 00:03:12.010 user 0m3.042s 00:03:12.010 sys 0m4.913s 00:03:12.010 13:32:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.010 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:12.010 ************************************ 00:03:12.010 END TEST devices 00:03:12.010 ************************************ 00:03:12.010 00:03:12.010 real 0m42.653s 00:03:12.010 user 0m12.431s 00:03:12.010 sys 0m18.770s 00:03:12.010 13:32:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.010 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:12.010 ************************************ 00:03:12.010 END TEST setup.sh 00:03:12.010 ************************************ 00:03:12.010 13:32:14 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:13.387 Hugepages 00:03:13.387 node hugesize free / total 00:03:13.387 node0 1048576kB 0 / 0 00:03:13.387 node0 2048kB 2048 / 2048 00:03:13.387 node1 1048576kB 0 / 0 00:03:13.387 node1 2048kB 0 / 0 00:03:13.387 00:03:13.387 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.387 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:13.387 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:13.387 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:13.387 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:13.387 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:13.387 13:32:15 -- spdk/autotest.sh@130 -- # uname -s 00:03:13.387 13:32:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:13.387 13:32:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:13.387 13:32:15 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.324 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.324 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.324 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.583 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.522 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.522 13:32:18 -- common/autotest_common.sh@1518 
-- # sleep 1 00:03:16.901 13:32:19 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:16.901 13:32:19 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:16.901 13:32:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:16.901 13:32:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:16.901 13:32:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:16.901 13:32:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:16.901 13:32:19 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.901 13:32:19 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.901 13:32:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:16.901 13:32:19 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:16.901 13:32:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:03:16.901 13:32:19 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.836 Waiting for block devices as requested 00:03:17.836 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:03:17.836 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:17.836 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:17.836 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:18.095 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:18.095 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:18.095 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:18.095 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:18.355 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:18.355 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:18.355 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:18.355 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:18.613 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:18.613 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:18.613 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:18.613 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:18.902 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:18.902 13:32:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:18.902 13:32:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1488 -- # grep 0000:82:00.0/nvme/nvme 00:03:18.902 13:32:21 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:03:18.902 13:32:21 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:18.902 13:32:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:18.902 13:32:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:18.902 13:32:21 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:18.902 13:32:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:18.902 13:32:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:18.902 13:32:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:18.902 13:32:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:18.902 13:32:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:18.902 13:32:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:18.902 13:32:21 -- common/autotest_common.sh@1541 -- # [[ 
0 -eq 0 ]] 00:03:18.902 13:32:21 -- common/autotest_common.sh@1543 -- # continue 00:03:18.902 13:32:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:18.902 13:32:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:18.902 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:03:18.902 13:32:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:18.902 13:32:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:18.902 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:03:18.902 13:32:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.839 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.839 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.839 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.223 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.223 13:32:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:21.223 13:32:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:21.223 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:21.223 13:32:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:21.223 13:32:23 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 
00:03:21.223 13:32:23 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:21.223 13:32:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:21.223 13:32:23 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:21.223 13:32:23 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:21.223 13:32:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:21.223 13:32:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:21.223 13:32:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.223 13:32:23 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:21.223 13:32:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:21.223 13:32:23 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:21.223 13:32:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:03:21.223 13:32:23 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:21.223 13:32:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:03:21.223 13:32:23 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:21.223 13:32:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:21.223 13:32:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:21.223 13:32:23 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:82:00.0 00:03:21.223 13:32:23 -- common/autotest_common.sh@1578 -- # [[ -z 0000:82:00.0 ]] 00:03:21.223 13:32:23 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2480332 00:03:21.223 13:32:23 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:21.223 13:32:23 -- common/autotest_common.sh@1584 -- # waitforlisten 2480332 00:03:21.223 13:32:23 -- common/autotest_common.sh@817 -- # '[' -z 2480332 ']' 00:03:21.223 13:32:23 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.223 13:32:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:21.223 13:32:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.223 13:32:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:21.223 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:21.223 [2024-04-18 13:32:23.930065] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:21.223 [2024-04-18 13:32:23.930152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480332 ] 00:03:21.223 EAL: No free 2048 kB hugepages reported on node 1 00:03:21.223 [2024-04-18 13:32:23.989419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.482 [2024-04-18 13:32:24.099062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.738 13:32:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:21.738 13:32:24 -- common/autotest_common.sh@850 -- # return 0 00:03:21.738 13:32:24 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:21.738 13:32:24 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:21.738 13:32:24 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:03:25.029 nvme0n1 00:03:25.029 13:32:27 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:25.029 [2024-04-18 13:32:27.671058] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting 
admin SP session with error 18 00:03:25.029 [2024-04-18 13:32:27.671106] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:25.029 request: 00:03:25.029 { 00:03:25.029 "nvme_ctrlr_name": "nvme0", 00:03:25.029 "password": "test", 00:03:25.029 "method": "bdev_nvme_opal_revert", 00:03:25.029 "req_id": 1 00:03:25.029 } 00:03:25.029 Got JSON-RPC error response 00:03:25.029 response: 00:03:25.029 { 00:03:25.029 "code": -32603, 00:03:25.029 "message": "Internal error" 00:03:25.029 } 00:03:25.029 13:32:27 -- common/autotest_common.sh@1590 -- # true 00:03:25.029 13:32:27 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:25.029 13:32:27 -- common/autotest_common.sh@1594 -- # killprocess 2480332 00:03:25.029 13:32:27 -- common/autotest_common.sh@936 -- # '[' -z 2480332 ']' 00:03:25.029 13:32:27 -- common/autotest_common.sh@940 -- # kill -0 2480332 00:03:25.029 13:32:27 -- common/autotest_common.sh@941 -- # uname 00:03:25.029 13:32:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:25.029 13:32:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2480332 00:03:25.029 13:32:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:25.029 13:32:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:25.029 13:32:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2480332' 00:03:25.029 killing process with pid 2480332 00:03:25.029 13:32:27 -- common/autotest_common.sh@955 -- # kill 2480332 00:03:25.029 13:32:27 -- common/autotest_common.sh@960 -- # wait 2480332 00:03:26.936 13:32:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:26.936 13:32:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:26.936 13:32:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:26.936 13:32:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:26.936 13:32:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:26.936 13:32:29 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:03:26.936 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.936 13:32:29 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.936 13:32:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.936 13:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.936 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.936 ************************************ 00:03:26.936 START TEST env 00:03:26.936 ************************************ 00:03:26.936 13:32:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.936 * Looking for test storage... 00:03:26.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:26.936 13:32:29 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:26.936 13:32:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.936 13:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.936 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.196 ************************************ 00:03:27.196 START TEST env_memory 00:03:27.196 ************************************ 00:03:27.196 13:32:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.196 00:03:27.196 00:03:27.196 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.196 http://cunit.sourceforge.net/ 00:03:27.196 00:03:27.196 00:03:27.196 Suite: memory 00:03:27.196 Test: alloc and free memory map ...[2024-04-18 13:32:29.840671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:27.196 passed 00:03:27.196 Test: mem map translation ...[2024-04-18 13:32:29.861805] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:27.196 [2024-04-18 13:32:29.861827] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:27.196 [2024-04-18 13:32:29.861877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:27.196 [2024-04-18 13:32:29.861889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:27.196 passed 00:03:27.196 Test: mem map registration ...[2024-04-18 13:32:29.902905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:27.196 [2024-04-18 13:32:29.902926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:27.196 passed 00:03:27.196 Test: mem map adjacent registrations ...passed 00:03:27.196 00:03:27.196 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.196 suites 1 1 n/a 0 0 00:03:27.196 tests 4 4 4 0 0 00:03:27.196 asserts 152 152 152 0 n/a 00:03:27.196 00:03:27.196 Elapsed time = 0.143 seconds 00:03:27.196 00:03:27.196 real 0m0.150s 00:03:27.196 user 0m0.140s 00:03:27.196 sys 0m0.010s 00:03:27.196 13:32:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:27.196 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.196 ************************************ 00:03:27.196 END TEST env_memory 00:03:27.196 ************************************ 00:03:27.196 13:32:29 -- 
env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.196 13:32:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.196 13:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.196 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.456 ************************************ 00:03:27.456 START TEST env_vtophys 00:03:27.456 ************************************ 00:03:27.456 13:32:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.456 EAL: lib.eal log level changed from notice to debug 00:03:27.456 EAL: Detected lcore 0 as core 0 on socket 0 00:03:27.456 EAL: Detected lcore 1 as core 1 on socket 0 00:03:27.456 EAL: Detected lcore 2 as core 2 on socket 0 00:03:27.456 EAL: Detected lcore 3 as core 3 on socket 0 00:03:27.456 EAL: Detected lcore 4 as core 4 on socket 0 00:03:27.456 EAL: Detected lcore 5 as core 5 on socket 0 00:03:27.456 EAL: Detected lcore 6 as core 8 on socket 0 00:03:27.457 EAL: Detected lcore 7 as core 9 on socket 0 00:03:27.457 EAL: Detected lcore 8 as core 10 on socket 0 00:03:27.457 EAL: Detected lcore 9 as core 11 on socket 0 00:03:27.457 EAL: Detected lcore 10 as core 12 on socket 0 00:03:27.457 EAL: Detected lcore 11 as core 13 on socket 0 00:03:27.457 EAL: Detected lcore 12 as core 0 on socket 1 00:03:27.457 EAL: Detected lcore 13 as core 1 on socket 1 00:03:27.457 EAL: Detected lcore 14 as core 2 on socket 1 00:03:27.457 EAL: Detected lcore 15 as core 3 on socket 1 00:03:27.457 EAL: Detected lcore 16 as core 4 on socket 1 00:03:27.457 EAL: Detected lcore 17 as core 5 on socket 1 00:03:27.457 EAL: Detected lcore 18 as core 8 on socket 1 00:03:27.457 EAL: Detected lcore 19 as core 9 on socket 1 00:03:27.457 EAL: Detected lcore 20 as core 10 on socket 1 00:03:27.457 EAL: Detected lcore 21 as core 11 on socket 1 00:03:27.457 EAL: Detected lcore 22 as core 12 on socket 
1 00:03:27.457 EAL: Detected lcore 23 as core 13 on socket 1 00:03:27.457 EAL: Detected lcore 24 as core 0 on socket 0 00:03:27.457 EAL: Detected lcore 25 as core 1 on socket 0 00:03:27.457 EAL: Detected lcore 26 as core 2 on socket 0 00:03:27.457 EAL: Detected lcore 27 as core 3 on socket 0 00:03:27.457 EAL: Detected lcore 28 as core 4 on socket 0 00:03:27.457 EAL: Detected lcore 29 as core 5 on socket 0 00:03:27.457 EAL: Detected lcore 30 as core 8 on socket 0 00:03:27.457 EAL: Detected lcore 31 as core 9 on socket 0 00:03:27.457 EAL: Detected lcore 32 as core 10 on socket 0 00:03:27.457 EAL: Detected lcore 33 as core 11 on socket 0 00:03:27.457 EAL: Detected lcore 34 as core 12 on socket 0 00:03:27.457 EAL: Detected lcore 35 as core 13 on socket 0 00:03:27.457 EAL: Detected lcore 36 as core 0 on socket 1 00:03:27.457 EAL: Detected lcore 37 as core 1 on socket 1 00:03:27.457 EAL: Detected lcore 38 as core 2 on socket 1 00:03:27.457 EAL: Detected lcore 39 as core 3 on socket 1 00:03:27.457 EAL: Detected lcore 40 as core 4 on socket 1 00:03:27.457 EAL: Detected lcore 41 as core 5 on socket 1 00:03:27.457 EAL: Detected lcore 42 as core 8 on socket 1 00:03:27.457 EAL: Detected lcore 43 as core 9 on socket 1 00:03:27.457 EAL: Detected lcore 44 as core 10 on socket 1 00:03:27.457 EAL: Detected lcore 45 as core 11 on socket 1 00:03:27.457 EAL: Detected lcore 46 as core 12 on socket 1 00:03:27.457 EAL: Detected lcore 47 as core 13 on socket 1 00:03:27.457 EAL: Maximum logical cores by configuration: 128 00:03:27.457 EAL: Detected CPU lcores: 48 00:03:27.457 EAL: Detected NUMA nodes: 2 00:03:27.457 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:27.457 EAL: Detected shared linkage of DPDK 00:03:27.457 EAL: No shared files mode enabled, IPC will be disabled 00:03:27.457 EAL: Bus pci wants IOVA as 'DC' 00:03:27.457 EAL: Buses did not request a specific IOVA mode. 00:03:27.457 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:03:27.457 EAL: Selected IOVA mode 'VA' 00:03:27.457 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.457 EAL: Probing VFIO support... 00:03:27.457 EAL: IOMMU type 1 (Type 1) is supported 00:03:27.457 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:27.457 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:27.457 EAL: VFIO support initialized 00:03:27.457 EAL: Ask a virtual area of 0x2e000 bytes 00:03:27.457 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:27.457 EAL: Setting up physically contiguous memory... 00:03:27.457 EAL: Setting maximum number of open files to 524288 00:03:27.457 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:27.457 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:27.457 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:03:27.457 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:27.457 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:27.457 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.457 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:27.457 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:03:27.457 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.457 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:27.457 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:27.457 EAL: Hugepages will be freed exactly as allocated. 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: TSC frequency is ~2700000 KHz 00:03:27.457 EAL: Main lcore 0 is ready (tid=7fd2db0e8a00;cpuset=[0]) 00:03:27.457 EAL: Trying to obtain current memory policy. 00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 0 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 2MB 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:27.457 EAL: Mem event callback 'spdk:(nil)' registered 00:03:27.457 00:03:27.457 00:03:27.457 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.457 http://cunit.sourceforge.net/ 00:03:27.457 00:03:27.457 00:03:27.457 Suite: components_suite 00:03:27.457 Test: vtophys_malloc_test ...passed 00:03:27.457 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 4 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 4MB 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was shrunk by 4MB 00:03:27.457 EAL: Trying to obtain current memory policy. 00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 4 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 6MB 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was shrunk by 6MB 00:03:27.457 EAL: Trying to obtain current memory policy. 00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 4 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 10MB 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was shrunk by 10MB 00:03:27.457 EAL: Trying to obtain current memory policy. 
00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 4 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 18MB 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was shrunk by 18MB 00:03:27.457 EAL: Trying to obtain current memory policy. 00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.457 EAL: Restoring previous memory policy: 4 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was expanded by 34MB 00:03:27.457 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.457 EAL: request: mp_malloc_sync 00:03:27.457 EAL: No shared files mode enabled, IPC is disabled 00:03:27.457 EAL: Heap on socket 0 was shrunk by 34MB 00:03:27.457 EAL: Trying to obtain current memory policy. 00:03:27.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.458 EAL: Restoring previous memory policy: 4 00:03:27.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.458 EAL: request: mp_malloc_sync 00:03:27.458 EAL: No shared files mode enabled, IPC is disabled 00:03:27.458 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.458 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.458 EAL: request: mp_malloc_sync 00:03:27.458 EAL: No shared files mode enabled, IPC is disabled 00:03:27.458 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.458 EAL: Trying to obtain current memory policy. 
00:03:27.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.716 EAL: Restoring previous memory policy: 4 00:03:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.716 EAL: request: mp_malloc_sync 00:03:27.716 EAL: No shared files mode enabled, IPC is disabled 00:03:27.716 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.716 EAL: request: mp_malloc_sync 00:03:27.716 EAL: No shared files mode enabled, IPC is disabled 00:03:27.716 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.716 EAL: Trying to obtain current memory policy. 00:03:27.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.716 EAL: Restoring previous memory policy: 4 00:03:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.716 EAL: request: mp_malloc_sync 00:03:27.716 EAL: No shared files mode enabled, IPC is disabled 00:03:27.716 EAL: Heap on socket 0 was expanded by 258MB 00:03:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.716 EAL: request: mp_malloc_sync 00:03:27.716 EAL: No shared files mode enabled, IPC is disabled 00:03:27.716 EAL: Heap on socket 0 was shrunk by 258MB 00:03:27.716 EAL: Trying to obtain current memory policy. 00:03:27.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.973 EAL: Restoring previous memory policy: 4 00:03:27.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.973 EAL: request: mp_malloc_sync 00:03:27.974 EAL: No shared files mode enabled, IPC is disabled 00:03:27.974 EAL: Heap on socket 0 was expanded by 514MB 00:03:27.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.233 EAL: request: mp_malloc_sync 00:03:28.233 EAL: No shared files mode enabled, IPC is disabled 00:03:28.233 EAL: Heap on socket 0 was shrunk by 514MB 00:03:28.233 EAL: Trying to obtain current memory policy. 
00:03:28.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.494 EAL: Restoring previous memory policy: 4 00:03:28.494 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.494 EAL: request: mp_malloc_sync 00:03:28.494 EAL: No shared files mode enabled, IPC is disabled 00:03:28.494 EAL: Heap on socket 0 was expanded by 1026MB 00:03:28.753 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.011 EAL: request: mp_malloc_sync 00:03:29.011 EAL: No shared files mode enabled, IPC is disabled 00:03:29.011 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:29.011 passed 00:03:29.011 00:03:29.011 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.011 suites 1 1 n/a 0 0 00:03:29.011 tests 2 2 2 0 0 00:03:29.011 asserts 497 497 497 0 n/a 00:03:29.011 00:03:29.011 Elapsed time = 1.402 seconds 00:03:29.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.012 EAL: request: mp_malloc_sync 00:03:29.012 EAL: No shared files mode enabled, IPC is disabled 00:03:29.012 EAL: Heap on socket 0 was shrunk by 2MB 00:03:29.012 EAL: No shared files mode enabled, IPC is disabled 00:03:29.012 EAL: No shared files mode enabled, IPC is disabled 00:03:29.012 EAL: No shared files mode enabled, IPC is disabled 00:03:29.012 00:03:29.012 real 0m1.524s 00:03:29.012 user 0m0.869s 00:03:29.012 sys 0m0.620s 00:03:29.012 13:32:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.012 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:29.012 ************************************ 00:03:29.012 END TEST env_vtophys 00:03:29.012 ************************************ 00:03:29.012 13:32:31 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.012 13:32:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.012 13:32:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.012 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:29.012 ************************************ 00:03:29.012 
START TEST env_pci 00:03:29.012 ************************************ 00:03:29.012 13:32:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.012 00:03:29.012 00:03:29.012 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.012 http://cunit.sourceforge.net/ 00:03:29.012 00:03:29.012 00:03:29.012 Suite: pci 00:03:29.012 Test: pci_hook ...[2024-04-18 13:32:31.742517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2481369 has claimed it 00:03:29.012 EAL: Cannot find device (10000:00:01.0) 00:03:29.012 EAL: Failed to attach device on primary process 00:03:29.012 passed 00:03:29.012 00:03:29.012 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.012 suites 1 1 n/a 0 0 00:03:29.012 tests 1 1 1 0 0 00:03:29.012 asserts 25 25 25 0 n/a 00:03:29.012 00:03:29.012 Elapsed time = 0.023 seconds 00:03:29.012 00:03:29.012 real 0m0.034s 00:03:29.012 user 0m0.011s 00:03:29.012 sys 0m0.023s 00:03:29.012 13:32:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.012 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:29.012 ************************************ 00:03:29.012 END TEST env_pci 00:03:29.012 ************************************ 00:03:29.012 13:32:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:29.012 13:32:31 -- env/env.sh@15 -- # uname 00:03:29.012 13:32:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:29.012 13:32:31 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:29.012 13:32:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.012 13:32:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:29.012 13:32:31 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:03:29.012 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:03:29.273 ************************************ 00:03:29.273 START TEST env_dpdk_post_init 00:03:29.273 ************************************ 00:03:29.273 13:32:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.273 EAL: Detected CPU lcores: 48 00:03:29.273 EAL: Detected NUMA nodes: 2 00:03:29.273 EAL: Detected shared linkage of DPDK 00:03:29.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:29.273 EAL: Selected IOVA mode 'VA' 00:03:29.273 EAL: No free 2048 kB hugepages reported on node 1 00:03:29.273 EAL: VFIO support initialized 00:03:29.273 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:29.273 EAL: Using IOMMU type 1 (Type 1) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:29.273 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:29.534 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:29.534 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:30.477 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:03:33.770 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:03:33.770 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:03:33.770 Starting DPDK initialization... 00:03:33.770 Starting SPDK post initialization... 00:03:33.770 SPDK NVMe probe 00:03:33.770 Attaching to 0000:82:00.0 00:03:33.770 Attached to 0000:82:00.0 00:03:33.770 Cleaning up... 00:03:33.770 00:03:33.770 real 0m4.397s 00:03:33.770 user 0m3.257s 00:03:33.770 sys 0m0.192s 00:03:33.770 13:32:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.770 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:33.770 ************************************ 00:03:33.770 END TEST env_dpdk_post_init 00:03:33.770 ************************************ 00:03:33.770 13:32:36 -- env/env.sh@26 -- # uname 00:03:33.770 13:32:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:33.770 13:32:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.770 13:32:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.770 13:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.770 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:33.770 ************************************ 00:03:33.770 START TEST env_mem_callbacks 00:03:33.770 ************************************ 00:03:33.770 13:32:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.770 EAL: Detected CPU lcores: 48 
00:03:33.770 EAL: Detected NUMA nodes: 2 00:03:33.770 EAL: Detected shared linkage of DPDK 00:03:33.770 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.770 EAL: Selected IOVA mode 'VA' 00:03:33.770 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.770 EAL: VFIO support initialized 00:03:33.770 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.770 00:03:33.770 00:03:33.770 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.770 http://cunit.sourceforge.net/ 00:03:33.770 00:03:33.770 00:03:33.770 Suite: memory 00:03:33.770 Test: test ... 00:03:33.770 register 0x200000200000 2097152 00:03:33.770 malloc 3145728 00:03:33.770 register 0x200000400000 4194304 00:03:33.770 buf 0x200000500000 len 3145728 PASSED 00:03:33.770 malloc 64 00:03:33.770 buf 0x2000004fff40 len 64 PASSED 00:03:33.770 malloc 4194304 00:03:33.770 register 0x200000800000 6291456 00:03:33.770 buf 0x200000a00000 len 4194304 PASSED 00:03:33.770 free 0x200000500000 3145728 00:03:33.770 free 0x2000004fff40 64 00:03:33.770 unregister 0x200000400000 4194304 PASSED 00:03:33.770 free 0x200000a00000 4194304 00:03:33.770 unregister 0x200000800000 6291456 PASSED 00:03:33.770 malloc 8388608 00:03:33.770 register 0x200000400000 10485760 00:03:33.770 buf 0x200000600000 len 8388608 PASSED 00:03:33.770 free 0x200000600000 8388608 00:03:33.770 unregister 0x200000400000 10485760 PASSED 00:03:33.770 passed 00:03:33.770 00:03:33.770 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.770 suites 1 1 n/a 0 0 00:03:33.770 tests 1 1 1 0 0 00:03:33.770 asserts 15 15 15 0 n/a 00:03:33.770 00:03:33.770 Elapsed time = 0.005 seconds 00:03:33.770 00:03:33.770 real 0m0.050s 00:03:33.770 user 0m0.015s 00:03:33.770 sys 0m0.035s 00:03:33.770 13:32:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.770 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:33.770 ************************************ 00:03:33.770 END TEST env_mem_callbacks 00:03:33.770 
************************************ 00:03:33.770 00:03:33.770 real 0m6.836s 00:03:33.770 user 0m4.539s 00:03:33.770 sys 0m1.269s 00:03:33.770 13:32:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.770 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:33.770 ************************************ 00:03:33.770 END TEST env 00:03:33.770 ************************************ 00:03:33.770 13:32:36 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.770 13:32:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.770 13:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.770 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:34.029 ************************************ 00:03:34.029 START TEST rpc 00:03:34.029 ************************************ 00:03:34.029 13:32:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:34.029 * Looking for test storage... 00:03:34.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.029 13:32:36 -- rpc/rpc.sh@65 -- # spdk_pid=2482048 00:03:34.029 13:32:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:34.029 13:32:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.029 13:32:36 -- rpc/rpc.sh@67 -- # waitforlisten 2482048 00:03:34.029 13:32:36 -- common/autotest_common.sh@817 -- # '[' -z 2482048 ']' 00:03:34.029 13:32:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.029 13:32:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:34.029 13:32:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:34.029 13:32:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:34.029 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:03:34.029 [2024-04-18 13:32:36.715875] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:34.029 [2024-04-18 13:32:36.715965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482048 ] 00:03:34.029 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.029 [2024-04-18 13:32:36.773830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.289 [2024-04-18 13:32:36.877741] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:34.289 [2024-04-18 13:32:36.877798] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2482048' to capture a snapshot of events at runtime. 00:03:34.289 [2024-04-18 13:32:36.877825] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:34.289 [2024-04-18 13:32:36.877836] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:34.289 [2024-04-18 13:32:36.877846] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2482048 for offline analysis/debug. 
00:03:34.289 [2024-04-18 13:32:36.877873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.549 13:32:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:34.549 13:32:37 -- common/autotest_common.sh@850 -- # return 0 00:03:34.549 13:32:37 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.549 13:32:37 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.549 13:32:37 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:34.549 13:32:37 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:34.549 13:32:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.549 13:32:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.549 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.549 ************************************ 00:03:34.549 START TEST rpc_integrity 00:03:34.549 ************************************ 00:03:34.549 13:32:37 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:34.549 13:32:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.549 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.549 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.549 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.549 13:32:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.549 13:32:37 -- rpc/rpc.sh@13 -- # jq length 00:03:34.549 13:32:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:03:34.549 13:32:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.549 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.549 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.549 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.549 13:32:37 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:34.549 13:32:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.549 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.549 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.549 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.549 13:32:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.549 { 00:03:34.549 "name": "Malloc0", 00:03:34.549 "aliases": [ 00:03:34.549 "61dfa90e-aeeb-433e-9a75-875de9558d00" 00:03:34.549 ], 00:03:34.549 "product_name": "Malloc disk", 00:03:34.549 "block_size": 512, 00:03:34.549 "num_blocks": 16384, 00:03:34.549 "uuid": "61dfa90e-aeeb-433e-9a75-875de9558d00", 00:03:34.549 "assigned_rate_limits": { 00:03:34.549 "rw_ios_per_sec": 0, 00:03:34.549 "rw_mbytes_per_sec": 0, 00:03:34.549 "r_mbytes_per_sec": 0, 00:03:34.549 "w_mbytes_per_sec": 0 00:03:34.549 }, 00:03:34.549 "claimed": false, 00:03:34.549 "zoned": false, 00:03:34.549 "supported_io_types": { 00:03:34.549 "read": true, 00:03:34.549 "write": true, 00:03:34.549 "unmap": true, 00:03:34.549 "write_zeroes": true, 00:03:34.549 "flush": true, 00:03:34.549 "reset": true, 00:03:34.549 "compare": false, 00:03:34.549 "compare_and_write": false, 00:03:34.549 "abort": true, 00:03:34.549 "nvme_admin": false, 00:03:34.549 "nvme_io": false 00:03:34.549 }, 00:03:34.549 "memory_domains": [ 00:03:34.549 { 00:03:34.549 "dma_device_id": "system", 00:03:34.549 "dma_device_type": 1 00:03:34.549 }, 00:03:34.549 { 00:03:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.549 "dma_device_type": 2 00:03:34.549 } 00:03:34.549 ], 00:03:34.549 "driver_specific": {} 00:03:34.549 } 00:03:34.549 ]' 00:03:34.549 
13:32:37 -- rpc/rpc.sh@17 -- # jq length 00:03:34.809 13:32:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.809 13:32:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:34.809 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.809 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.809 [2024-04-18 13:32:37.366904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:34.809 [2024-04-18 13:32:37.366955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.809 [2024-04-18 13:32:37.366979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc3e380 00:03:34.809 [2024-04-18 13:32:37.367002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.809 [2024-04-18 13:32:37.368469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.809 [2024-04-18 13:32:37.368493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.809 Passthru0 00:03:34.809 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.809 13:32:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.809 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.809 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.809 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.809 13:32:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.809 { 00:03:34.809 "name": "Malloc0", 00:03:34.809 "aliases": [ 00:03:34.809 "61dfa90e-aeeb-433e-9a75-875de9558d00" 00:03:34.809 ], 00:03:34.809 "product_name": "Malloc disk", 00:03:34.809 "block_size": 512, 00:03:34.809 "num_blocks": 16384, 00:03:34.809 "uuid": "61dfa90e-aeeb-433e-9a75-875de9558d00", 00:03:34.809 "assigned_rate_limits": { 00:03:34.809 "rw_ios_per_sec": 0, 00:03:34.809 "rw_mbytes_per_sec": 0, 00:03:34.809 "r_mbytes_per_sec": 0, 00:03:34.809 "w_mbytes_per_sec": 0 00:03:34.809 
}, 00:03:34.809 "claimed": true, 00:03:34.809 "claim_type": "exclusive_write", 00:03:34.809 "zoned": false, 00:03:34.809 "supported_io_types": { 00:03:34.809 "read": true, 00:03:34.809 "write": true, 00:03:34.809 "unmap": true, 00:03:34.809 "write_zeroes": true, 00:03:34.809 "flush": true, 00:03:34.809 "reset": true, 00:03:34.809 "compare": false, 00:03:34.809 "compare_and_write": false, 00:03:34.809 "abort": true, 00:03:34.809 "nvme_admin": false, 00:03:34.809 "nvme_io": false 00:03:34.809 }, 00:03:34.809 "memory_domains": [ 00:03:34.809 { 00:03:34.809 "dma_device_id": "system", 00:03:34.809 "dma_device_type": 1 00:03:34.809 }, 00:03:34.809 { 00:03:34.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.809 "dma_device_type": 2 00:03:34.809 } 00:03:34.809 ], 00:03:34.809 "driver_specific": {} 00:03:34.809 }, 00:03:34.809 { 00:03:34.809 "name": "Passthru0", 00:03:34.809 "aliases": [ 00:03:34.809 "46765a44-349a-510b-8c9c-d9d3a6f802e0" 00:03:34.809 ], 00:03:34.809 "product_name": "passthru", 00:03:34.809 "block_size": 512, 00:03:34.810 "num_blocks": 16384, 00:03:34.810 "uuid": "46765a44-349a-510b-8c9c-d9d3a6f802e0", 00:03:34.810 "assigned_rate_limits": { 00:03:34.810 "rw_ios_per_sec": 0, 00:03:34.810 "rw_mbytes_per_sec": 0, 00:03:34.810 "r_mbytes_per_sec": 0, 00:03:34.810 "w_mbytes_per_sec": 0 00:03:34.810 }, 00:03:34.810 "claimed": false, 00:03:34.810 "zoned": false, 00:03:34.810 "supported_io_types": { 00:03:34.810 "read": true, 00:03:34.810 "write": true, 00:03:34.810 "unmap": true, 00:03:34.810 "write_zeroes": true, 00:03:34.810 "flush": true, 00:03:34.810 "reset": true, 00:03:34.810 "compare": false, 00:03:34.810 "compare_and_write": false, 00:03:34.810 "abort": true, 00:03:34.810 "nvme_admin": false, 00:03:34.810 "nvme_io": false 00:03:34.810 }, 00:03:34.810 "memory_domains": [ 00:03:34.810 { 00:03:34.810 "dma_device_id": "system", 00:03:34.810 "dma_device_type": 1 00:03:34.810 }, 00:03:34.810 { 00:03:34.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:03:34.810 "dma_device_type": 2 00:03:34.810 } 00:03:34.810 ], 00:03:34.810 "driver_specific": { 00:03:34.810 "passthru": { 00:03:34.810 "name": "Passthru0", 00:03:34.810 "base_bdev_name": "Malloc0" 00:03:34.810 } 00:03:34.810 } 00:03:34.810 } 00:03:34.810 ]' 00:03:34.810 13:32:37 -- rpc/rpc.sh@21 -- # jq length 00:03:34.810 13:32:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.810 13:32:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.810 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.810 13:32:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:34.810 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.810 13:32:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.810 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.810 13:32:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.810 13:32:37 -- rpc/rpc.sh@26 -- # jq length 00:03:34.810 13:32:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.810 00:03:34.810 real 0m0.229s 00:03:34.810 user 0m0.147s 00:03:34.810 sys 0m0.022s 00:03:34.810 13:32:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 ************************************ 00:03:34.810 END TEST rpc_integrity 00:03:34.810 ************************************ 00:03:34.810 13:32:37 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:34.810 13:32:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.810 13:32:37 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 ************************************ 00:03:34.810 START TEST rpc_plugins 00:03:34.810 ************************************ 00:03:34.810 13:32:37 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:34.810 13:32:37 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.810 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.810 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:34.810 13:32:37 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.810 13:32:37 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.810 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:34.810 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.069 13:32:37 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:35.069 { 00:03:35.069 "name": "Malloc1", 00:03:35.069 "aliases": [ 00:03:35.069 "b82616e1-d7d7-40fc-9327-7b1bd1fb383f" 00:03:35.069 ], 00:03:35.069 "product_name": "Malloc disk", 00:03:35.069 "block_size": 4096, 00:03:35.069 "num_blocks": 256, 00:03:35.069 "uuid": "b82616e1-d7d7-40fc-9327-7b1bd1fb383f", 00:03:35.069 "assigned_rate_limits": { 00:03:35.069 "rw_ios_per_sec": 0, 00:03:35.069 "rw_mbytes_per_sec": 0, 00:03:35.069 "r_mbytes_per_sec": 0, 00:03:35.069 "w_mbytes_per_sec": 0 00:03:35.069 }, 00:03:35.069 "claimed": false, 00:03:35.069 "zoned": false, 00:03:35.069 "supported_io_types": { 00:03:35.069 "read": true, 00:03:35.069 "write": true, 00:03:35.069 "unmap": true, 00:03:35.069 "write_zeroes": true, 00:03:35.069 "flush": true, 00:03:35.069 "reset": true, 00:03:35.069 "compare": false, 00:03:35.069 "compare_and_write": false, 00:03:35.069 "abort": true, 00:03:35.069 "nvme_admin": false, 00:03:35.069 "nvme_io": false 00:03:35.069 }, 00:03:35.069 "memory_domains": [ 00:03:35.069 { 
00:03:35.069 "dma_device_id": "system", 00:03:35.069 "dma_device_type": 1 00:03:35.069 }, 00:03:35.069 { 00:03:35.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.069 "dma_device_type": 2 00:03:35.069 } 00:03:35.069 ], 00:03:35.069 "driver_specific": {} 00:03:35.069 } 00:03:35.069 ]' 00:03:35.069 13:32:37 -- rpc/rpc.sh@32 -- # jq length 00:03:35.069 13:32:37 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:35.069 13:32:37 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:35.069 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.069 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.069 13:32:37 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:35.069 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.069 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.069 13:32:37 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:35.069 13:32:37 -- rpc/rpc.sh@36 -- # jq length 00:03:35.069 13:32:37 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:35.069 00:03:35.069 real 0m0.112s 00:03:35.069 user 0m0.074s 00:03:35.069 sys 0m0.009s 00:03:35.069 13:32:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.069 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 ************************************ 00:03:35.069 END TEST rpc_plugins 00:03:35.069 ************************************ 00:03:35.069 13:32:37 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:35.069 13:32:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.069 13:32:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.069 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 ************************************ 00:03:35.069 START TEST rpc_trace_cmd_test 00:03:35.069 ************************************ 00:03:35.069 
13:32:37 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:35.069 13:32:37 -- rpc/rpc.sh@40 -- # local info 00:03:35.069 13:32:37 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:35.069 13:32:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.069 13:32:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.069 13:32:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.069 13:32:37 -- rpc/rpc.sh@42 -- # info='{ 00:03:35.069 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2482048", 00:03:35.069 "tpoint_group_mask": "0x8", 00:03:35.069 "iscsi_conn": { 00:03:35.069 "mask": "0x2", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "scsi": { 00:03:35.069 "mask": "0x4", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "bdev": { 00:03:35.069 "mask": "0x8", 00:03:35.069 "tpoint_mask": "0xffffffffffffffff" 00:03:35.069 }, 00:03:35.069 "nvmf_rdma": { 00:03:35.069 "mask": "0x10", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "nvmf_tcp": { 00:03:35.069 "mask": "0x20", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "ftl": { 00:03:35.069 "mask": "0x40", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "blobfs": { 00:03:35.069 "mask": "0x80", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "dsa": { 00:03:35.069 "mask": "0x200", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "thread": { 00:03:35.069 "mask": "0x400", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "nvme_pcie": { 00:03:35.069 "mask": "0x800", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "iaa": { 00:03:35.069 "mask": "0x1000", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "nvme_tcp": { 00:03:35.069 "mask": "0x2000", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "bdev_nvme": { 00:03:35.069 "mask": "0x4000", 00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 }, 00:03:35.069 "sock": { 00:03:35.069 "mask": "0x8000", 
00:03:35.069 "tpoint_mask": "0x0" 00:03:35.069 } 00:03:35.069 }' 00:03:35.069 13:32:37 -- rpc/rpc.sh@43 -- # jq length 00:03:35.327 13:32:37 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:35.327 13:32:37 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:35.327 13:32:37 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:35.327 13:32:37 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:35.327 13:32:37 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:35.327 13:32:37 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:35.327 13:32:37 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:35.327 13:32:37 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:35.327 13:32:38 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:35.327 00:03:35.327 real 0m0.197s 00:03:35.327 user 0m0.172s 00:03:35.327 sys 0m0.016s 00:03:35.327 13:32:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.327 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.327 ************************************ 00:03:35.327 END TEST rpc_trace_cmd_test 00:03:35.327 ************************************ 00:03:35.327 13:32:38 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:35.327 13:32:38 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:35.327 13:32:38 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:35.327 13:32:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.327 13:32:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.327 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.610 ************************************ 00:03:35.610 START TEST rpc_daemon_integrity 00:03:35.610 ************************************ 00:03:35.610 13:32:38 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:35.610 13:32:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:35.610 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.610 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.610 13:32:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.610 13:32:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:35.610 13:32:38 -- rpc/rpc.sh@13 -- # jq length 00:03:35.610 13:32:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:35.610 13:32:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:35.610 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.610 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.610 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.610 13:32:38 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:35.610 13:32:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:35.610 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.610 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.610 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.610 13:32:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:35.610 { 00:03:35.610 "name": "Malloc2", 00:03:35.610 "aliases": [ 00:03:35.610 "aba201a1-f971-4110-a8d4-c7de73d770f6" 00:03:35.610 ], 00:03:35.610 "product_name": "Malloc disk", 00:03:35.610 "block_size": 512, 00:03:35.610 "num_blocks": 16384, 00:03:35.610 "uuid": "aba201a1-f971-4110-a8d4-c7de73d770f6", 00:03:35.610 "assigned_rate_limits": { 00:03:35.610 "rw_ios_per_sec": 0, 00:03:35.610 "rw_mbytes_per_sec": 0, 00:03:35.610 "r_mbytes_per_sec": 0, 00:03:35.610 "w_mbytes_per_sec": 0 00:03:35.610 }, 00:03:35.610 "claimed": false, 00:03:35.610 "zoned": false, 00:03:35.610 "supported_io_types": { 00:03:35.610 "read": true, 00:03:35.610 "write": true, 00:03:35.610 "unmap": true, 00:03:35.610 "write_zeroes": true, 00:03:35.610 "flush": true, 00:03:35.610 "reset": true, 00:03:35.610 "compare": false, 00:03:35.610 "compare_and_write": false, 00:03:35.610 "abort": true, 00:03:35.610 "nvme_admin": false, 00:03:35.610 "nvme_io": false 00:03:35.610 }, 00:03:35.610 "memory_domains": [ 00:03:35.610 { 00:03:35.610 "dma_device_id": "system", 00:03:35.610 "dma_device_type": 1 00:03:35.610 }, 
00:03:35.610 { 00:03:35.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.610 "dma_device_type": 2 00:03:35.610 } 00:03:35.610 ], 00:03:35.610 "driver_specific": {} 00:03:35.610 } 00:03:35.610 ]' 00:03:35.610 13:32:38 -- rpc/rpc.sh@17 -- # jq length 00:03:35.610 13:32:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:35.610 13:32:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:35.610 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.610 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.610 [2024-04-18 13:32:38.257968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:35.610 [2024-04-18 13:32:38.258019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:35.610 [2024-04-18 13:32:38.258043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcdc0e0 00:03:35.610 [2024-04-18 13:32:38.258058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:35.610 [2024-04-18 13:32:38.259414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:35.611 [2024-04-18 13:32:38.259443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:35.611 Passthru0 00:03:35.611 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.611 13:32:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:35.611 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.611 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.611 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.611 13:32:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:35.611 { 00:03:35.611 "name": "Malloc2", 00:03:35.611 "aliases": [ 00:03:35.611 "aba201a1-f971-4110-a8d4-c7de73d770f6" 00:03:35.611 ], 00:03:35.611 "product_name": "Malloc disk", 00:03:35.611 "block_size": 512, 00:03:35.611 "num_blocks": 16384, 00:03:35.611 "uuid": 
"aba201a1-f971-4110-a8d4-c7de73d770f6", 00:03:35.611 "assigned_rate_limits": { 00:03:35.611 "rw_ios_per_sec": 0, 00:03:35.611 "rw_mbytes_per_sec": 0, 00:03:35.611 "r_mbytes_per_sec": 0, 00:03:35.611 "w_mbytes_per_sec": 0 00:03:35.611 }, 00:03:35.611 "claimed": true, 00:03:35.611 "claim_type": "exclusive_write", 00:03:35.611 "zoned": false, 00:03:35.611 "supported_io_types": { 00:03:35.611 "read": true, 00:03:35.611 "write": true, 00:03:35.611 "unmap": true, 00:03:35.611 "write_zeroes": true, 00:03:35.611 "flush": true, 00:03:35.611 "reset": true, 00:03:35.611 "compare": false, 00:03:35.611 "compare_and_write": false, 00:03:35.611 "abort": true, 00:03:35.611 "nvme_admin": false, 00:03:35.611 "nvme_io": false 00:03:35.611 }, 00:03:35.611 "memory_domains": [ 00:03:35.611 { 00:03:35.611 "dma_device_id": "system", 00:03:35.611 "dma_device_type": 1 00:03:35.611 }, 00:03:35.611 { 00:03:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.611 "dma_device_type": 2 00:03:35.611 } 00:03:35.611 ], 00:03:35.611 "driver_specific": {} 00:03:35.611 }, 00:03:35.611 { 00:03:35.611 "name": "Passthru0", 00:03:35.611 "aliases": [ 00:03:35.611 "2016c9da-f9bf-5457-8ee8-6309b685d9d2" 00:03:35.611 ], 00:03:35.611 "product_name": "passthru", 00:03:35.611 "block_size": 512, 00:03:35.611 "num_blocks": 16384, 00:03:35.611 "uuid": "2016c9da-f9bf-5457-8ee8-6309b685d9d2", 00:03:35.611 "assigned_rate_limits": { 00:03:35.611 "rw_ios_per_sec": 0, 00:03:35.611 "rw_mbytes_per_sec": 0, 00:03:35.611 "r_mbytes_per_sec": 0, 00:03:35.611 "w_mbytes_per_sec": 0 00:03:35.611 }, 00:03:35.611 "claimed": false, 00:03:35.611 "zoned": false, 00:03:35.611 "supported_io_types": { 00:03:35.611 "read": true, 00:03:35.611 "write": true, 00:03:35.611 "unmap": true, 00:03:35.611 "write_zeroes": true, 00:03:35.611 "flush": true, 00:03:35.611 "reset": true, 00:03:35.611 "compare": false, 00:03:35.611 "compare_and_write": false, 00:03:35.611 "abort": true, 00:03:35.611 "nvme_admin": false, 00:03:35.611 "nvme_io": false 
00:03:35.611 }, 00:03:35.611 "memory_domains": [ 00:03:35.611 { 00:03:35.611 "dma_device_id": "system", 00:03:35.611 "dma_device_type": 1 00:03:35.611 }, 00:03:35.611 { 00:03:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.611 "dma_device_type": 2 00:03:35.611 } 00:03:35.611 ], 00:03:35.611 "driver_specific": { 00:03:35.611 "passthru": { 00:03:35.611 "name": "Passthru0", 00:03:35.611 "base_bdev_name": "Malloc2" 00:03:35.611 } 00:03:35.611 } 00:03:35.611 } 00:03:35.611 ]' 00:03:35.611 13:32:38 -- rpc/rpc.sh@21 -- # jq length 00:03:35.611 13:32:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.611 13:32:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.611 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.611 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.611 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.611 13:32:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:35.611 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.611 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.611 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.611 13:32:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.611 13:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:35.611 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.611 13:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:35.611 13:32:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.611 13:32:38 -- rpc/rpc.sh@26 -- # jq length 00:03:35.611 13:32:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.611 00:03:35.611 real 0m0.227s 00:03:35.611 user 0m0.147s 00:03:35.611 sys 0m0.024s 00:03:35.611 13:32:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.611 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.611 ************************************ 00:03:35.611 END TEST rpc_daemon_integrity 00:03:35.611 
************************************ 00:03:35.611 13:32:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:35.611 13:32:38 -- rpc/rpc.sh@84 -- # killprocess 2482048 00:03:35.871 13:32:38 -- common/autotest_common.sh@936 -- # '[' -z 2482048 ']' 00:03:35.871 13:32:38 -- common/autotest_common.sh@940 -- # kill -0 2482048 00:03:35.871 13:32:38 -- common/autotest_common.sh@941 -- # uname 00:03:35.871 13:32:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:35.871 13:32:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2482048 00:03:35.871 13:32:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:35.871 13:32:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:35.871 13:32:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2482048' 00:03:35.871 killing process with pid 2482048 00:03:35.871 13:32:38 -- common/autotest_common.sh@955 -- # kill 2482048 00:03:35.871 13:32:38 -- common/autotest_common.sh@960 -- # wait 2482048 00:03:36.131 00:03:36.131 real 0m2.286s 00:03:36.131 user 0m2.859s 00:03:36.131 sys 0m0.730s 00:03:36.131 13:32:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:36.131 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:36.131 ************************************ 00:03:36.131 END TEST rpc 00:03:36.131 ************************************ 00:03:36.131 13:32:38 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:36.131 13:32:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.131 13:32:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.131 13:32:38 -- common/autotest_common.sh@10 -- # set +x 00:03:36.389 ************************************ 00:03:36.389 START TEST skip_rpc 00:03:36.389 ************************************ 00:03:36.389 13:32:39 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:36.389 * Looking for test storage... 00:03:36.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:36.389 13:32:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.389 13:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.389 13:32:39 -- common/autotest_common.sh@10 -- # set +x 00:03:36.389 ************************************ 00:03:36.389 START TEST skip_rpc 00:03:36.389 ************************************ 00:03:36.389 13:32:39 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2482534 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.389 13:32:39 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:36.648 [2024-04-18 13:32:39.213795] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:03:36.648 [2024-04-18 13:32:39.213857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482534 ] 00:03:36.648 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.648 [2024-04-18 13:32:39.274262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.648 [2024-04-18 13:32:39.390580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.926 13:32:44 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:41.926 13:32:44 -- common/autotest_common.sh@638 -- # local es=0 00:03:41.926 13:32:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:41.926 13:32:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:41.926 13:32:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:41.926 13:32:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:41.926 13:32:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:41.926 13:32:44 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:41.926 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.926 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:03:41.926 13:32:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:41.926 13:32:44 -- common/autotest_common.sh@641 -- # es=1 00:03:41.926 13:32:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:41.926 13:32:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:41.926 13:32:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:41.926 13:32:44 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:41.926 13:32:44 -- rpc/skip_rpc.sh@23 -- # killprocess 2482534 00:03:41.926 13:32:44 -- common/autotest_common.sh@936 -- # '[' -z 2482534 ']' 00:03:41.926 13:32:44 -- common/autotest_common.sh@940 -- # 
kill -0 2482534 00:03:41.926 13:32:44 -- common/autotest_common.sh@941 -- # uname 00:03:41.926 13:32:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:41.926 13:32:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2482534 00:03:41.926 13:32:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:41.926 13:32:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:41.926 13:32:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2482534' 00:03:41.926 killing process with pid 2482534 00:03:41.926 13:32:44 -- common/autotest_common.sh@955 -- # kill 2482534 00:03:41.926 13:32:44 -- common/autotest_common.sh@960 -- # wait 2482534 00:03:41.926 00:03:41.926 real 0m5.491s 00:03:41.926 user 0m5.173s 00:03:41.926 sys 0m0.322s 00:03:41.926 13:32:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.926 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:03:41.926 ************************************ 00:03:41.926 END TEST skip_rpc 00:03:41.926 ************************************ 00:03:41.926 13:32:44 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:41.926 13:32:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.926 13:32:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.926 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:03:42.183 ************************************ 00:03:42.183 START TEST skip_rpc_with_json 00:03:42.183 ************************************ 00:03:42.183 13:32:44 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:42.183 13:32:44 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:42.183 13:32:44 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2483233 00:03:42.184 13:32:44 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:42.184 13:32:44 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 
00:03:42.184 13:32:44 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2483233 00:03:42.184 13:32:44 -- common/autotest_common.sh@817 -- # '[' -z 2483233 ']' 00:03:42.184 13:32:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.184 13:32:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:42.184 13:32:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.184 13:32:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:42.184 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:03:42.184 [2024-04-18 13:32:44.830663] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:42.184 [2024-04-18 13:32:44.830763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483233 ] 00:03:42.184 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.184 [2024-04-18 13:32:44.893108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.442 [2024-04-18 13:32:45.003661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.010 13:32:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:43.010 13:32:45 -- common/autotest_common.sh@850 -- # return 0 00:03:43.010 13:32:45 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:43.010 13:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:43.010 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:43.010 [2024-04-18 13:32:45.750521] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:43.010 request: 00:03:43.010 { 00:03:43.010 "trtype": "tcp", 00:03:43.010 "method": "nvmf_get_transports", 
00:03:43.010 "req_id": 1 00:03:43.010 } 00:03:43.010 Got JSON-RPC error response 00:03:43.010 response: 00:03:43.010 { 00:03:43.010 "code": -19, 00:03:43.010 "message": "No such device" 00:03:43.010 } 00:03:43.010 13:32:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:43.010 13:32:45 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:43.010 13:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:43.010 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:43.010 [2024-04-18 13:32:45.758633] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:43.010 13:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:43.010 13:32:45 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:43.010 13:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:43.010 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:43.270 13:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:43.270 13:32:45 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.270 { 00:03:43.270 "subsystems": [ 00:03:43.270 { 00:03:43.270 "subsystem": "vfio_user_target", 00:03:43.270 "config": null 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "keyring", 00:03:43.270 "config": [] 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "iobuf", 00:03:43.270 "config": [ 00:03:43.270 { 00:03:43.270 "method": "iobuf_set_options", 00:03:43.270 "params": { 00:03:43.270 "small_pool_count": 8192, 00:03:43.270 "large_pool_count": 1024, 00:03:43.270 "small_bufsize": 8192, 00:03:43.270 "large_bufsize": 135168 00:03:43.270 } 00:03:43.270 } 00:03:43.270 ] 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "sock", 00:03:43.270 "config": [ 00:03:43.270 { 00:03:43.270 "method": "sock_impl_set_options", 00:03:43.270 "params": { 00:03:43.270 "impl_name": "posix", 00:03:43.270 "recv_buf_size": 2097152, 00:03:43.270 "send_buf_size": 2097152, 00:03:43.270 
"enable_recv_pipe": true, 00:03:43.270 "enable_quickack": false, 00:03:43.270 "enable_placement_id": 0, 00:03:43.270 "enable_zerocopy_send_server": true, 00:03:43.270 "enable_zerocopy_send_client": false, 00:03:43.270 "zerocopy_threshold": 0, 00:03:43.270 "tls_version": 0, 00:03:43.270 "enable_ktls": false 00:03:43.270 } 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "method": "sock_impl_set_options", 00:03:43.270 "params": { 00:03:43.270 "impl_name": "ssl", 00:03:43.270 "recv_buf_size": 4096, 00:03:43.270 "send_buf_size": 4096, 00:03:43.270 "enable_recv_pipe": true, 00:03:43.270 "enable_quickack": false, 00:03:43.270 "enable_placement_id": 0, 00:03:43.270 "enable_zerocopy_send_server": true, 00:03:43.270 "enable_zerocopy_send_client": false, 00:03:43.270 "zerocopy_threshold": 0, 00:03:43.270 "tls_version": 0, 00:03:43.270 "enable_ktls": false 00:03:43.270 } 00:03:43.270 } 00:03:43.270 ] 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "vmd", 00:03:43.270 "config": [] 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "accel", 00:03:43.270 "config": [ 00:03:43.270 { 00:03:43.270 "method": "accel_set_options", 00:03:43.270 "params": { 00:03:43.270 "small_cache_size": 128, 00:03:43.270 "large_cache_size": 16, 00:03:43.270 "task_count": 2048, 00:03:43.270 "sequence_count": 2048, 00:03:43.270 "buf_count": 2048 00:03:43.270 } 00:03:43.270 } 00:03:43.270 ] 00:03:43.270 }, 00:03:43.270 { 00:03:43.270 "subsystem": "bdev", 00:03:43.270 "config": [ 00:03:43.270 { 00:03:43.270 "method": "bdev_set_options", 00:03:43.270 "params": { 00:03:43.270 "bdev_io_pool_size": 65535, 00:03:43.270 "bdev_io_cache_size": 256, 00:03:43.270 "bdev_auto_examine": true, 00:03:43.270 "iobuf_small_cache_size": 128, 00:03:43.271 "iobuf_large_cache_size": 16 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "bdev_raid_set_options", 00:03:43.271 "params": { 00:03:43.271 "process_window_size_kb": 1024 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": 
"bdev_iscsi_set_options", 00:03:43.271 "params": { 00:03:43.271 "timeout_sec": 30 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "bdev_nvme_set_options", 00:03:43.271 "params": { 00:03:43.271 "action_on_timeout": "none", 00:03:43.271 "timeout_us": 0, 00:03:43.271 "timeout_admin_us": 0, 00:03:43.271 "keep_alive_timeout_ms": 10000, 00:03:43.271 "arbitration_burst": 0, 00:03:43.271 "low_priority_weight": 0, 00:03:43.271 "medium_priority_weight": 0, 00:03:43.271 "high_priority_weight": 0, 00:03:43.271 "nvme_adminq_poll_period_us": 10000, 00:03:43.271 "nvme_ioq_poll_period_us": 0, 00:03:43.271 "io_queue_requests": 0, 00:03:43.271 "delay_cmd_submit": true, 00:03:43.271 "transport_retry_count": 4, 00:03:43.271 "bdev_retry_count": 3, 00:03:43.271 "transport_ack_timeout": 0, 00:03:43.271 "ctrlr_loss_timeout_sec": 0, 00:03:43.271 "reconnect_delay_sec": 0, 00:03:43.271 "fast_io_fail_timeout_sec": 0, 00:03:43.271 "disable_auto_failback": false, 00:03:43.271 "generate_uuids": false, 00:03:43.271 "transport_tos": 0, 00:03:43.271 "nvme_error_stat": false, 00:03:43.271 "rdma_srq_size": 0, 00:03:43.271 "io_path_stat": false, 00:03:43.271 "allow_accel_sequence": false, 00:03:43.271 "rdma_max_cq_size": 0, 00:03:43.271 "rdma_cm_event_timeout_ms": 0, 00:03:43.271 "dhchap_digests": [ 00:03:43.271 "sha256", 00:03:43.271 "sha384", 00:03:43.271 "sha512" 00:03:43.271 ], 00:03:43.271 "dhchap_dhgroups": [ 00:03:43.271 "null", 00:03:43.271 "ffdhe2048", 00:03:43.271 "ffdhe3072", 00:03:43.271 "ffdhe4096", 00:03:43.271 "ffdhe6144", 00:03:43.271 "ffdhe8192" 00:03:43.271 ] 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "bdev_nvme_set_hotplug", 00:03:43.271 "params": { 00:03:43.271 "period_us": 100000, 00:03:43.271 "enable": false 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "bdev_wait_for_examine" 00:03:43.271 } 00:03:43.271 ] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "scsi", 00:03:43.271 "config": null 00:03:43.271 
}, 00:03:43.271 { 00:03:43.271 "subsystem": "scheduler", 00:03:43.271 "config": [ 00:03:43.271 { 00:03:43.271 "method": "framework_set_scheduler", 00:03:43.271 "params": { 00:03:43.271 "name": "static" 00:03:43.271 } 00:03:43.271 } 00:03:43.271 ] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "vhost_scsi", 00:03:43.271 "config": [] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "vhost_blk", 00:03:43.271 "config": [] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "ublk", 00:03:43.271 "config": [] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "nbd", 00:03:43.271 "config": [] 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "subsystem": "nvmf", 00:03:43.271 "config": [ 00:03:43.271 { 00:03:43.271 "method": "nvmf_set_config", 00:03:43.271 "params": { 00:03:43.271 "discovery_filter": "match_any", 00:03:43.271 "admin_cmd_passthru": { 00:03:43.271 "identify_ctrlr": false 00:03:43.271 } 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "nvmf_set_max_subsystems", 00:03:43.271 "params": { 00:03:43.271 "max_subsystems": 1024 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "nvmf_set_crdt", 00:03:43.271 "params": { 00:03:43.271 "crdt1": 0, 00:03:43.271 "crdt2": 0, 00:03:43.271 "crdt3": 0 00:03:43.271 } 00:03:43.271 }, 00:03:43.271 { 00:03:43.271 "method": "nvmf_create_transport", 00:03:43.271 "params": { 00:03:43.271 "trtype": "TCP", 00:03:43.271 "max_queue_depth": 128, 00:03:43.271 "max_io_qpairs_per_ctrlr": 127, 00:03:43.271 "in_capsule_data_size": 4096, 00:03:43.271 "max_io_size": 131072, 00:03:43.271 "io_unit_size": 131072, 00:03:43.271 "max_aq_depth": 128, 00:03:43.271 "num_shared_buffers": 511, 00:03:43.271 "buf_cache_size": 4294967295, 00:03:43.271 "dif_insert_or_strip": false, 00:03:43.271 "zcopy": false, 00:03:43.271 "c2h_success": true, 00:03:43.271 "sock_priority": 0, 00:03:43.271 "abort_timeout_sec": 1, 00:03:43.271 "ack_timeout": 0 00:03:43.271 } 00:03:43.271 } 00:03:43.271 ] 00:03:43.271 
}, 00:03:43.271 { 00:03:43.271 "subsystem": "iscsi", 00:03:43.271 "config": [ 00:03:43.271 { 00:03:43.271 "method": "iscsi_set_options", 00:03:43.271 "params": { 00:03:43.271 "node_base": "iqn.2016-06.io.spdk", 00:03:43.271 "max_sessions": 128, 00:03:43.271 "max_connections_per_session": 2, 00:03:43.271 "max_queue_depth": 64, 00:03:43.271 "default_time2wait": 2, 00:03:43.271 "default_time2retain": 20, 00:03:43.271 "first_burst_length": 8192, 00:03:43.271 "immediate_data": true, 00:03:43.271 "allow_duplicated_isid": false, 00:03:43.271 "error_recovery_level": 0, 00:03:43.271 "nop_timeout": 60, 00:03:43.271 "nop_in_interval": 30, 00:03:43.271 "disable_chap": false, 00:03:43.271 "require_chap": false, 00:03:43.271 "mutual_chap": false, 00:03:43.271 "chap_group": 0, 00:03:43.271 "max_large_datain_per_connection": 64, 00:03:43.271 "max_r2t_per_connection": 4, 00:03:43.271 "pdu_pool_size": 36864, 00:03:43.271 "immediate_data_pool_size": 16384, 00:03:43.271 "data_out_pool_size": 2048 00:03:43.271 } 00:03:43.271 } 00:03:43.271 ] 00:03:43.271 } 00:03:43.271 ] 00:03:43.271 } 00:03:43.271 13:32:45 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:43.271 13:32:45 -- rpc/skip_rpc.sh@40 -- # killprocess 2483233 00:03:43.271 13:32:45 -- common/autotest_common.sh@936 -- # '[' -z 2483233 ']' 00:03:43.271 13:32:45 -- common/autotest_common.sh@940 -- # kill -0 2483233 00:03:43.271 13:32:45 -- common/autotest_common.sh@941 -- # uname 00:03:43.271 13:32:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:43.271 13:32:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2483233 00:03:43.271 13:32:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:43.271 13:32:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:43.271 13:32:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2483233' 00:03:43.271 killing process with pid 2483233 00:03:43.271 13:32:45 -- common/autotest_common.sh@955 -- # 
kill 2483233 00:03:43.271 13:32:45 -- common/autotest_common.sh@960 -- # wait 2483233 00:03:43.837 13:32:46 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2483490 00:03:43.837 13:32:46 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.837 13:32:46 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:49.110 13:32:51 -- rpc/skip_rpc.sh@50 -- # killprocess 2483490 00:03:49.110 13:32:51 -- common/autotest_common.sh@936 -- # '[' -z 2483490 ']' 00:03:49.110 13:32:51 -- common/autotest_common.sh@940 -- # kill -0 2483490 00:03:49.110 13:32:51 -- common/autotest_common.sh@941 -- # uname 00:03:49.110 13:32:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:49.110 13:32:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2483490 00:03:49.110 13:32:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:49.110 13:32:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:49.110 13:32:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2483490' 00:03:49.110 killing process with pid 2483490 00:03:49.110 13:32:51 -- common/autotest_common.sh@955 -- # kill 2483490 00:03:49.110 13:32:51 -- common/autotest_common.sh@960 -- # wait 2483490 00:03:49.369 13:32:51 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.369 13:32:51 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.369 00:03:49.369 real 0m7.143s 00:03:49.369 user 0m6.894s 00:03:49.369 sys 0m0.728s 00:03:49.369 13:32:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:49.369 13:32:51 -- common/autotest_common.sh@10 -- # set +x 00:03:49.369 ************************************ 00:03:49.369 END TEST skip_rpc_with_json 00:03:49.369 ************************************ 
00:03:49.369 13:32:51 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:49.369 13:32:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.369 13:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.369 13:32:51 -- common/autotest_common.sh@10 -- # set +x 00:03:49.369 ************************************ 00:03:49.369 START TEST skip_rpc_with_delay 00:03:49.369 ************************************ 00:03:49.369 13:32:52 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:49.369 13:32:52 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.369 13:32:52 -- common/autotest_common.sh@638 -- # local es=0 00:03:49.369 13:32:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.369 13:32:52 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.369 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:49.369 13:32:52 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.369 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:49.369 13:32:52 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.369 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:49.369 13:32:52 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.369 13:32:52 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:49.369 13:32:52 -- common/autotest_common.sh@641 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:49.369 [2024-04-18 13:32:52.098405] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:49.369 [2024-04-18 13:32:52.098547] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:49.369 13:32:52 -- common/autotest_common.sh@641 -- # es=1 00:03:49.369 13:32:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:49.369 13:32:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:49.369 13:32:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:49.369 00:03:49.369 real 0m0.069s 00:03:49.369 user 0m0.047s 00:03:49.369 sys 0m0.022s 00:03:49.369 13:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:49.369 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:03:49.369 ************************************ 00:03:49.369 END TEST skip_rpc_with_delay 00:03:49.369 ************************************ 00:03:49.369 13:32:52 -- rpc/skip_rpc.sh@77 -- # uname 00:03:49.369 13:32:52 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:49.369 13:32:52 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:49.369 13:32:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.369 13:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.369 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:03:49.628 ************************************ 00:03:49.628 START TEST exit_on_failed_rpc_init 00:03:49.628 ************************************ 00:03:49.628 13:32:52 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:03:49.628 13:32:52 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2484226 00:03:49.628 13:32:52 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:49.628 13:32:52 -- rpc/skip_rpc.sh@63 -- # 
waitforlisten 2484226 00:03:49.628 13:32:52 -- common/autotest_common.sh@817 -- # '[' -z 2484226 ']' 00:03:49.628 13:32:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.628 13:32:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:49.628 13:32:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.628 13:32:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:49.628 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:03:49.628 [2024-04-18 13:32:52.281920] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:49.628 [2024-04-18 13:32:52.282008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484226 ] 00:03:49.628 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.628 [2024-04-18 13:32:52.339391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.887 [2024-04-18 13:32:52.448322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.146 13:32:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:50.146 13:32:52 -- common/autotest_common.sh@850 -- # return 0 00:03:50.146 13:32:52 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.146 13:32:52 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.146 13:32:52 -- common/autotest_common.sh@638 -- # local es=0 00:03:50.146 13:32:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.146 13:32:52 -- common/autotest_common.sh@626 -- # 
local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.146 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:50.146 13:32:52 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.146 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:50.146 13:32:52 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.146 13:32:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:50.146 13:32:52 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.146 13:32:52 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:50.146 13:32:52 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:50.146 [2024-04-18 13:32:52.763439] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:50.146 [2024-04-18 13:32:52.763551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484244 ] 00:03:50.146 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.146 [2024-04-18 13:32:52.825450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.146 [2024-04-18 13:32:52.944328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:50.146 [2024-04-18 13:32:52.944446] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:50.146 [2024-04-18 13:32:52.944464] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:50.146 [2024-04-18 13:32:52.944493] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:50.403 13:32:53 -- common/autotest_common.sh@641 -- # es=234 00:03:50.403 13:32:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:50.403 13:32:53 -- common/autotest_common.sh@650 -- # es=106 00:03:50.403 13:32:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:03:50.403 13:32:53 -- common/autotest_common.sh@658 -- # es=1 00:03:50.403 13:32:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:50.403 13:32:53 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:50.403 13:32:53 -- rpc/skip_rpc.sh@70 -- # killprocess 2484226 00:03:50.403 13:32:53 -- common/autotest_common.sh@936 -- # '[' -z 2484226 ']' 00:03:50.403 13:32:53 -- common/autotest_common.sh@940 -- # kill -0 2484226 00:03:50.403 13:32:53 -- common/autotest_common.sh@941 -- # uname 00:03:50.403 13:32:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:50.403 13:32:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2484226 00:03:50.403 13:32:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:50.403 13:32:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:50.403 13:32:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2484226' 00:03:50.403 killing process with pid 2484226 00:03:50.403 13:32:53 -- common/autotest_common.sh@955 -- # kill 2484226 00:03:50.403 13:32:53 -- common/autotest_common.sh@960 -- # wait 2484226 00:03:50.970 00:03:50.970 real 0m1.347s 00:03:50.971 user 0m1.507s 00:03:50.971 sys 0m0.453s 00:03:50.971 13:32:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.971 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.971 ************************************ 00:03:50.971 END TEST exit_on_failed_rpc_init 
00:03:50.971 ************************************ 00:03:50.971 13:32:53 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.971 00:03:50.971 real 0m14.583s 00:03:50.971 user 0m13.797s 00:03:50.971 sys 0m1.852s 00:03:50.971 13:32:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.971 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.971 ************************************ 00:03:50.971 END TEST skip_rpc 00:03:50.971 ************************************ 00:03:50.971 13:32:53 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:50.971 13:32:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.971 13:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.971 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.971 ************************************ 00:03:50.971 START TEST rpc_client 00:03:50.971 ************************************ 00:03:50.971 13:32:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:50.971 * Looking for test storage... 
00:03:50.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:50.971 13:32:53 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:51.229 OK 00:03:51.229 13:32:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:51.229 00:03:51.229 real 0m0.069s 00:03:51.229 user 0m0.035s 00:03:51.229 sys 0m0.039s 00:03:51.229 13:32:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.229 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.229 ************************************ 00:03:51.229 END TEST rpc_client 00:03:51.229 ************************************ 00:03:51.229 13:32:53 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:51.229 13:32:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.229 13:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.229 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.229 ************************************ 00:03:51.229 START TEST json_config 00:03:51.229 ************************************ 00:03:51.229 13:32:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:51.229 13:32:53 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.229 13:32:53 -- nvmf/common.sh@7 -- # uname -s 00:03:51.229 13:32:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.229 13:32:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.229 13:32:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.229 13:32:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.229 13:32:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.229 13:32:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.229 13:32:53 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.229 13:32:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.229 13:32:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.229 13:32:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.229 13:32:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:51.229 13:32:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:51.229 13:32:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.229 13:32:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.229 13:32:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.229 13:32:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.229 13:32:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.229 13:32:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.229 13:32:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.229 13:32:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.230 13:32:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.230 13:32:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.230 13:32:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.230 13:32:53 -- paths/export.sh@5 -- # export PATH 00:03:51.230 13:32:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.230 13:32:53 -- nvmf/common.sh@47 -- # : 0 00:03:51.230 13:32:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:51.230 13:32:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:51.230 13:32:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.230 13:32:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.230 13:32:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.230 13:32:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:51.230 13:32:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:51.230 13:32:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:51.230 
13:32:53 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:51.230 13:32:53 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:51.230 13:32:53 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:51.230 13:32:53 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:51.230 13:32:53 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:51.230 13:32:53 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:51.230 13:32:53 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:51.230 13:32:53 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:51.230 13:32:53 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:51.230 13:32:53 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:51.230 13:32:53 -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:51.230 13:32:53 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:51.230 13:32:53 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:51.230 13:32:53 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:51.230 13:32:53 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:51.230 13:32:53 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:51.230 INFO: JSON configuration test init 00:03:51.230 13:32:53 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:51.230 13:32:53 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:03:51.230 13:32:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:51.230 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.230 13:32:53 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:51.230 13:32:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:51.230 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.230 13:32:53 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:51.230 13:32:53 -- json_config/common.sh@9 -- # local app=target 00:03:51.230 13:32:53 -- json_config/common.sh@10 -- # shift 00:03:51.230 13:32:53 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:51.230 13:32:53 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:51.230 13:32:53 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:51.230 13:32:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:51.230 13:32:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:51.230 13:32:53 -- json_config/common.sh@22 -- # app_pid["$app"]=2484502 00:03:51.230 13:32:53 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:51.230 13:32:53 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:51.230 Waiting for target to run... 00:03:51.230 13:32:53 -- json_config/common.sh@25 -- # waitforlisten 2484502 /var/tmp/spdk_tgt.sock 00:03:51.230 13:32:53 -- common/autotest_common.sh@817 -- # '[' -z 2484502 ']' 00:03:51.230 13:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:51.230 13:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:51.230 13:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:03:51.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:51.230 13:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:51.230 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:03:51.230 [2024-04-18 13:32:54.017224] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:03:51.230 [2024-04-18 13:32:54.017323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484502 ] 00:03:51.488 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.748 [2024-04-18 13:32:54.529906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.006 [2024-04-18 13:32:54.636079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.264 13:32:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:52.264 13:32:54 -- common/autotest_common.sh@850 -- # return 0 00:03:52.264 13:32:54 -- json_config/common.sh@26 -- # echo '' 00:03:52.264 00:03:52.264 13:32:54 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:52.264 13:32:54 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:52.264 13:32:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:52.264 13:32:54 -- common/autotest_common.sh@10 -- # set +x 00:03:52.264 13:32:54 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:52.264 13:32:54 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:52.264 13:32:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.264 13:32:54 -- common/autotest_common.sh@10 -- # set +x 00:03:52.264 13:32:54 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:52.264 13:32:54 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
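The `waitforlisten` step traced above (`local max_retries=100`, "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...") polls until `spdk_tgt` binds its RPC socket. A minimal Python sketch of that polling pattern; the helper name `wait_for_unix_socket` is mine, not taken from the SPDK scripts:

```python
import os
import socket
import time

def wait_for_unix_socket(path, max_retries=100, delay=0.1):
    """Poll until a UNIX domain socket at `path` accepts connections."""
    for _ in range(max_retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True      # daemon is up and accepting RPCs
            except OSError:
                pass             # socket file exists but nothing listening yet
            finally:
                s.close()
        time.sleep(delay)
    return False                 # gave up after max_retries polls
```

The connect-probe (rather than a bare existence check) matters: the socket file can appear before the application is actually servicing requests.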
00:03:52.264 13:32:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:55.554 13:32:58 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:55.554 13:32:58 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:55.554 13:32:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:55.554 13:32:58 -- common/autotest_common.sh@10 -- # set +x 00:03:55.554 13:32:58 -- json_config/json_config.sh@45 -- # local ret=0 00:03:55.554 13:32:58 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:55.554 13:32:58 -- json_config/json_config.sh@46 -- # local enabled_types 00:03:55.554 13:32:58 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:55.554 13:32:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:55.554 13:32:58 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:55.843 13:32:58 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:55.843 13:32:58 -- json_config/json_config.sh@48 -- # local get_types 00:03:55.843 13:32:58 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:55.843 13:32:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:55.843 13:32:58 -- common/autotest_common.sh@10 -- # set +x 00:03:55.843 13:32:58 -- json_config/json_config.sh@55 -- # return 0 00:03:55.843 13:32:58 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:55.843 13:32:58 -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:55.843 13:32:58 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:55.843 13:32:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:55.843 13:32:58 -- common/autotest_common.sh@10 -- # set +x 00:03:55.843 13:32:58 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:55.843 13:32:58 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:55.843 13:32:58 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.843 13:32:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.843 MallocForNvmf0 00:03:56.101 13:32:58 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:56.101 13:32:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:56.101 MallocForNvmf1 00:03:56.101 13:32:58 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:56.101 13:32:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:56.359 [2024-04-18 13:32:59.102811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.359 13:32:59 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:56.359 13:32:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:56.616 13:32:59 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:56.616 13:32:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:56.874 13:32:59 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:56.874 13:32:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:57.132 13:32:59 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:57.132 13:32:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:57.390 [2024-04-18 13:33:00.066059] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:57.390 13:33:00 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:57.390 13:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.390 13:33:00 -- common/autotest_common.sh@10 -- # set +x 00:03:57.390 13:33:00 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:57.390 13:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.390 13:33:00 -- common/autotest_common.sh@10 -- # set +x 00:03:57.390 13:33:00 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:57.390 13:33:00 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 
00:03:57.390 13:33:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:57.649 MallocBdevForConfigChangeCheck 00:03:57.649 13:33:00 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:57.649 13:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.649 13:33:00 -- common/autotest_common.sh@10 -- # set +x 00:03:57.649 13:33:00 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:57.649 13:33:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.217 13:33:00 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:58.217 INFO: shutting down applications... 00:03:58.217 13:33:00 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:58.217 13:33:00 -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:58.217 13:33:00 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:58.217 13:33:00 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:00.123 Calling clear_iscsi_subsystem 00:04:00.123 Calling clear_nvmf_subsystem 00:04:00.123 Calling clear_nbd_subsystem 00:04:00.123 Calling clear_ublk_subsystem 00:04:00.123 Calling clear_vhost_blk_subsystem 00:04:00.123 Calling clear_vhost_scsi_subsystem 00:04:00.123 Calling clear_bdev_subsystem 00:04:00.123 13:33:02 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:00.123 13:33:02 -- json_config/json_config.sh@343 -- # count=100 00:04:00.123 13:33:02 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:00.123 13:33:02 -- json_config/json_config.sh@345 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.123 13:33:02 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:00.123 13:33:02 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:00.123 13:33:02 -- json_config/json_config.sh@345 -- # break 00:04:00.123 13:33:02 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:00.123 13:33:02 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:00.123 13:33:02 -- json_config/common.sh@31 -- # local app=target 00:04:00.123 13:33:02 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:00.123 13:33:02 -- json_config/common.sh@35 -- # [[ -n 2484502 ]] 00:04:00.123 13:33:02 -- json_config/common.sh@38 -- # kill -SIGINT 2484502 00:04:00.123 13:33:02 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:00.123 13:33:02 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.123 13:33:02 -- json_config/common.sh@41 -- # kill -0 2484502 00:04:00.123 13:33:02 -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.690 13:33:03 -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.690 13:33:03 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.690 13:33:03 -- json_config/common.sh@41 -- # kill -0 2484502 00:04:00.690 13:33:03 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.690 13:33:03 -- json_config/common.sh@43 -- # break 00:04:00.690 13:33:03 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.690 13:33:03 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.690 SPDK target shutdown done 00:04:00.690 13:33:03 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:00.690 INFO: relaunching applications... 
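The shutdown sequence traced above (`kill -SIGINT 2484502`, then up to 30 iterations of `kill -0` with `sleep 0.5` before "SPDK target shutdown done") is a plain graceful-shutdown poll loop. A sketch of the same logic in Python; `shutdown_app` is a hypothetical name mirroring `json_config_test_shutdown_app`, not an SPDK function:

```python
import signal
import subprocess
import time

def shutdown_app(proc, max_polls=30, delay=0.5):
    """SIGINT the target, then poll for exit (the `kill -0` loop in common.sh)."""
    proc.send_signal(signal.SIGINT)
    for _ in range(max_polls):
        if proc.poll() is not None:   # process has exited and been reaped
            return True
        time.sleep(delay)
    return False                      # still alive after ~15s; caller escalates
```

Using `Popen.poll()` here plays the role of `kill -0`: it checks liveness without sending a signal, and it also reaps the child so a zombie does not read as "still running".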
00:04:00.690 13:33:03 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.690 13:33:03 -- json_config/common.sh@9 -- # local app=target 00:04:00.690 13:33:03 -- json_config/common.sh@10 -- # shift 00:04:00.690 13:33:03 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.690 13:33:03 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.690 13:33:03 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.690 13:33:03 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.690 13:33:03 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.690 13:33:03 -- json_config/common.sh@22 -- # app_pid["$app"]=2485924 00:04:00.690 13:33:03 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.690 13:33:03 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.690 Waiting for target to run... 00:04:00.690 13:33:03 -- json_config/common.sh@25 -- # waitforlisten 2485924 /var/tmp/spdk_tgt.sock 00:04:00.690 13:33:03 -- common/autotest_common.sh@817 -- # '[' -z 2485924 ']' 00:04:00.690 13:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.690 13:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:00.690 13:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.690 13:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:00.690 13:33:03 -- common/autotest_common.sh@10 -- # set +x 00:04:00.690 [2024-04-18 13:33:03.391657] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:00.690 [2024-04-18 13:33:03.391747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485924 ] 00:04:00.690 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.258 [2024-04-18 13:33:03.934377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.258 [2024-04-18 13:33:04.037937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.546 [2024-04-18 13:33:07.072132] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:04.546 [2024-04-18 13:33:07.104671] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:05.112 13:33:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:05.112 13:33:07 -- common/autotest_common.sh@850 -- # return 0 00:04:05.112 13:33:07 -- json_config/common.sh@26 -- # echo '' 00:04:05.112 00:04:05.112 13:33:07 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:05.112 13:33:07 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:05.112 INFO: Checking if target configuration is the same... 00:04:05.112 13:33:07 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.112 13:33:07 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:05.112 13:33:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.112 + '[' 2 -ne 2 ']' 00:04:05.112 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.112 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:05.112 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.112 +++ basename /dev/fd/62 00:04:05.112 ++ mktemp /tmp/62.XXX 00:04:05.112 + tmp_file_1=/tmp/62.nvM 00:04:05.112 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.112 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.112 + tmp_file_2=/tmp/spdk_tgt_config.json.hWz 00:04:05.112 + ret=0 00:04:05.112 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.681 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.681 + diff -u /tmp/62.nvM /tmp/spdk_tgt_config.json.hWz 00:04:05.681 + echo 'INFO: JSON config files are the same' 00:04:05.681 INFO: JSON config files are the same 00:04:05.681 + rm /tmp/62.nvM /tmp/spdk_tgt_config.json.hWz 00:04:05.681 + exit 0 00:04:05.681 13:33:08 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:05.681 13:33:08 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:05.681 INFO: changing configuration and checking if this can be detected... 
00:04:05.681 13:33:08 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.681 13:33:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.940 13:33:08 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.940 13:33:08 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:05.940 13:33:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.940 + '[' 2 -ne 2 ']' 00:04:05.940 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.940 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:05.940 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.940 +++ basename /dev/fd/62 00:04:05.940 ++ mktemp /tmp/62.XXX 00:04:05.940 + tmp_file_1=/tmp/62.kr4 00:04:05.940 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.940 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.940 + tmp_file_2=/tmp/spdk_tgt_config.json.Lwq 00:04:05.940 + ret=0 00:04:05.940 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.198 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.198 + diff -u /tmp/62.kr4 /tmp/spdk_tgt_config.json.Lwq 00:04:06.198 + ret=1 00:04:06.198 + echo '=== Start of file: /tmp/62.kr4 ===' 00:04:06.198 + cat /tmp/62.kr4 00:04:06.198 + echo '=== End of file: /tmp/62.kr4 ===' 00:04:06.198 + echo '' 00:04:06.198 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Lwq ===' 00:04:06.198 + cat /tmp/spdk_tgt_config.json.Lwq 00:04:06.198 + echo '=== End of file: /tmp/spdk_tgt_config.json.Lwq ===' 00:04:06.198 + echo '' 00:04:06.198 + rm /tmp/62.kr4 /tmp/spdk_tgt_config.json.Lwq 00:04:06.198 + exit 1 00:04:06.198 13:33:08 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:06.199 INFO: configuration change detected. 
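The detection logic traced above pipes both configs through `config_filter.py -method sort` before `diff -u`, so only semantic changes (here, the deleted `MallocBdevForConfigChangeCheck` bdev) produce `ret=1`, while ordering differences do not. A rough Python sketch of that idea, with my own helper names; note this only normalizes object key order, whereas the real filter also sorts the subsystem/method arrays:

```python
import json

def canonical(config_text):
    """Re-serialize JSON with sorted keys so equal configs compare byte-equal."""
    return json.dumps(json.loads(config_text), sort_keys=True, indent=2)

def configs_match(a_text, b_text):
    """True when the two JSON configs are semantically identical."""
    return canonical(a_text) == canonical(b_text)
```

Comparing canonical forms is what lets the test print "INFO: JSON config files are the same" on the first pass and flag "configuration change detected." after the bdev is deleted.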
00:04:06.199 13:33:08 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:06.199 13:33:08 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:06.199 13:33:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.199 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:04:06.199 13:33:08 -- json_config/json_config.sh@307 -- # local ret=0 00:04:06.199 13:33:08 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:06.199 13:33:08 -- json_config/json_config.sh@317 -- # [[ -n 2485924 ]] 00:04:06.199 13:33:08 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:06.199 13:33:08 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:06.199 13:33:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.199 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:04:06.199 13:33:08 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:06.199 13:33:08 -- json_config/json_config.sh@193 -- # uname -s 00:04:06.199 13:33:08 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:06.199 13:33:08 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:06.199 13:33:08 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:06.199 13:33:08 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:06.199 13:33:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:06.199 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:04:06.199 13:33:08 -- json_config/json_config.sh@323 -- # killprocess 2485924 00:04:06.199 13:33:08 -- common/autotest_common.sh@936 -- # '[' -z 2485924 ']' 00:04:06.199 13:33:08 -- common/autotest_common.sh@940 -- # kill -0 2485924 00:04:06.199 13:33:08 -- common/autotest_common.sh@941 -- # uname 00:04:06.199 13:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:06.199 13:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2485924 00:04:06.199 
13:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:06.199 13:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:06.199 13:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2485924' 00:04:06.199 killing process with pid 2485924 00:04:06.199 13:33:08 -- common/autotest_common.sh@955 -- # kill 2485924 00:04:06.199 13:33:08 -- common/autotest_common.sh@960 -- # wait 2485924 00:04:08.102 13:33:10 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.102 13:33:10 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:08.102 13:33:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:08.102 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:08.102 13:33:10 -- json_config/json_config.sh@328 -- # return 0 00:04:08.102 13:33:10 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:08.102 INFO: Success 00:04:08.102 00:04:08.102 real 0m16.760s 00:04:08.102 user 0m18.506s 00:04:08.102 sys 0m2.268s 00:04:08.102 13:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.102 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:08.102 ************************************ 00:04:08.102 END TEST json_config 00:04:08.102 ************************************ 00:04:08.102 13:33:10 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:08.102 13:33:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.102 13:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.102 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:08.102 ************************************ 00:04:08.102 START TEST json_config_extra_key 00:04:08.102 ************************************ 00:04:08.102 
13:33:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:08.102 13:33:10 -- nvmf/common.sh@7 -- # uname -s 00:04:08.102 13:33:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.102 13:33:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.102 13:33:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.102 13:33:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.102 13:33:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.102 13:33:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.102 13:33:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.102 13:33:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.102 13:33:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.102 13:33:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.102 13:33:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:08.102 13:33:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:08.102 13:33:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.102 13:33:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.102 13:33:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.102 13:33:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.102 13:33:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:08.102 13:33:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.102 13:33:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.102 13:33:10 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.102 13:33:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.102 13:33:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.102 13:33:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.102 13:33:10 -- paths/export.sh@5 -- # export PATH 00:04:08.102 13:33:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.102 13:33:10 -- nvmf/common.sh@47 -- # : 0 00:04:08.102 13:33:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:04:08.102 13:33:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:08.102 13:33:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.102 13:33:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.102 13:33:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.102 13:33:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:08.102 13:33:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:08.102 13:33:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:08.102 INFO: launching applications... 
00:04:08.102 13:33:10 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:08.102 13:33:10 -- json_config/common.sh@9 -- # local app=target 00:04:08.102 13:33:10 -- json_config/common.sh@10 -- # shift 00:04:08.102 13:33:10 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:08.103 13:33:10 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:08.103 13:33:10 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:08.103 13:33:10 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.103 13:33:10 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.103 13:33:10 -- json_config/common.sh@22 -- # app_pid["$app"]=2487360 00:04:08.103 13:33:10 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:08.103 13:33:10 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:08.103 Waiting for target to run... 00:04:08.103 13:33:10 -- json_config/common.sh@25 -- # waitforlisten 2487360 /var/tmp/spdk_tgt.sock 00:04:08.103 13:33:10 -- common/autotest_common.sh@817 -- # '[' -z 2487360 ']' 00:04:08.103 13:33:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:08.103 13:33:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:08.103 13:33:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:08.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:08.103 13:33:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:08.103 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:04:08.103 [2024-04-18 13:33:10.894688] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:08.103 [2024-04-18 13:33:10.894770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487360 ] 00:04:08.361 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.621 [2024-04-18 13:33:11.245040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.621 [2024-04-18 13:33:11.331062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.190 13:33:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:09.190 13:33:11 -- common/autotest_common.sh@850 -- # return 0 00:04:09.190 13:33:11 -- json_config/common.sh@26 -- # echo '' 00:04:09.190 00:04:09.190 13:33:11 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:09.190 INFO: shutting down applications... 
00:04:09.190 13:33:11 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:09.190 13:33:11 -- json_config/common.sh@31 -- # local app=target 00:04:09.190 13:33:11 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:09.190 13:33:11 -- json_config/common.sh@35 -- # [[ -n 2487360 ]] 00:04:09.190 13:33:11 -- json_config/common.sh@38 -- # kill -SIGINT 2487360 00:04:09.190 13:33:11 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:09.190 13:33:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.190 13:33:11 -- json_config/common.sh@41 -- # kill -0 2487360 00:04:09.190 13:33:11 -- json_config/common.sh@45 -- # sleep 0.5 00:04:09.757 13:33:12 -- json_config/common.sh@40 -- # (( i++ )) 00:04:09.757 13:33:12 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.757 13:33:12 -- json_config/common.sh@41 -- # kill -0 2487360 00:04:09.757 13:33:12 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:09.757 13:33:12 -- json_config/common.sh@43 -- # break 00:04:09.757 13:33:12 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:09.757 13:33:12 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:09.757 SPDK target shutdown done 00:04:09.757 13:33:12 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:09.757 Success 00:04:09.757 00:04:09.757 real 0m1.542s 00:04:09.757 user 0m1.546s 00:04:09.757 sys 0m0.441s 00:04:09.757 13:33:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.757 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.757 ************************************ 00:04:09.757 END TEST json_config_extra_key 00:04:09.757 ************************************ 00:04:09.757 13:33:12 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:09.757 13:33:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.757 13:33:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:04:09.757 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.757 ************************************ 00:04:09.757 START TEST alias_rpc 00:04:09.757 ************************************ 00:04:09.757 13:33:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:09.757 * Looking for test storage... 00:04:09.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:09.757 13:33:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:09.757 13:33:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2487674 00:04:09.757 13:33:12 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.757 13:33:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2487674 00:04:09.757 13:33:12 -- common/autotest_common.sh@817 -- # '[' -z 2487674 ']' 00:04:09.757 13:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.757 13:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:09.757 13:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.757 13:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:09.757 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.757 [2024-04-18 13:33:12.563921] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:09.757 [2024-04-18 13:33:12.564004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487674 ] 00:04:10.015 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.015 [2024-04-18 13:33:12.621482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.015 [2024-04-18 13:33:12.725063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.274 13:33:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.274 13:33:12 -- common/autotest_common.sh@850 -- # return 0 00:04:10.274 13:33:12 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:10.533 13:33:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2487674 00:04:10.533 13:33:13 -- common/autotest_common.sh@936 -- # '[' -z 2487674 ']' 00:04:10.533 13:33:13 -- common/autotest_common.sh@940 -- # kill -0 2487674 00:04:10.533 13:33:13 -- common/autotest_common.sh@941 -- # uname 00:04:10.533 13:33:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:10.533 13:33:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2487674 00:04:10.533 13:33:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:10.533 13:33:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:10.533 13:33:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2487674' 00:04:10.533 killing process with pid 2487674 00:04:10.533 13:33:13 -- common/autotest_common.sh@955 -- # kill 2487674 00:04:10.533 13:33:13 -- common/autotest_common.sh@960 -- # wait 2487674 00:04:11.101 00:04:11.101 real 0m1.284s 00:04:11.101 user 0m1.355s 00:04:11.101 sys 0m0.420s 00:04:11.101 13:33:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.101 13:33:13 -- common/autotest_common.sh@10 -- # set +x 
00:04:11.101 ************************************ 00:04:11.101 END TEST alias_rpc 00:04:11.101 ************************************ 00:04:11.101 13:33:13 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:11.101 13:33:13 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.101 13:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.101 13:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.101 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.101 ************************************ 00:04:11.101 START TEST spdkcli_tcp 00:04:11.101 ************************************ 00:04:11.101 13:33:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.359 * Looking for test storage... 00:04:11.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:11.359 13:33:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:11.359 13:33:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:11.359 13:33:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:11.359 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2487875 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:11.359 13:33:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 2487875 00:04:11.359 13:33:13 -- common/autotest_common.sh@817 -- # '[' -z 2487875 ']' 00:04:11.359 13:33:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.359 13:33:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:11.359 13:33:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.359 13:33:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:11.359 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.359 [2024-04-18 13:33:13.975144] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:11.360 [2024-04-18 13:33:13.975249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487875 ] 00:04:11.360 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.360 [2024-04-18 13:33:14.039428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.360 [2024-04-18 13:33:14.156516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.360 [2024-04-18 13:33:14.156520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.323 13:33:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:12.323 13:33:14 -- common/autotest_common.sh@850 -- # return 0 00:04:12.323 13:33:14 -- spdkcli/tcp.sh@31 -- # socat_pid=2488014 00:04:12.323 13:33:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:12.323 13:33:14 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 
100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:12.323 [ 00:04:12.324 "bdev_malloc_delete", 00:04:12.324 "bdev_malloc_create", 00:04:12.324 "bdev_null_resize", 00:04:12.324 "bdev_null_delete", 00:04:12.324 "bdev_null_create", 00:04:12.324 "bdev_nvme_cuse_unregister", 00:04:12.324 "bdev_nvme_cuse_register", 00:04:12.324 "bdev_opal_new_user", 00:04:12.324 "bdev_opal_set_lock_state", 00:04:12.324 "bdev_opal_delete", 00:04:12.324 "bdev_opal_get_info", 00:04:12.324 "bdev_opal_create", 00:04:12.324 "bdev_nvme_opal_revert", 00:04:12.324 "bdev_nvme_opal_init", 00:04:12.324 "bdev_nvme_send_cmd", 00:04:12.324 "bdev_nvme_get_path_iostat", 00:04:12.324 "bdev_nvme_get_mdns_discovery_info", 00:04:12.324 "bdev_nvme_stop_mdns_discovery", 00:04:12.324 "bdev_nvme_start_mdns_discovery", 00:04:12.324 "bdev_nvme_set_multipath_policy", 00:04:12.324 "bdev_nvme_set_preferred_path", 00:04:12.324 "bdev_nvme_get_io_paths", 00:04:12.324 "bdev_nvme_remove_error_injection", 00:04:12.324 "bdev_nvme_add_error_injection", 00:04:12.324 "bdev_nvme_get_discovery_info", 00:04:12.324 "bdev_nvme_stop_discovery", 00:04:12.324 "bdev_nvme_start_discovery", 00:04:12.324 "bdev_nvme_get_controller_health_info", 00:04:12.324 "bdev_nvme_disable_controller", 00:04:12.324 "bdev_nvme_enable_controller", 00:04:12.324 "bdev_nvme_reset_controller", 00:04:12.324 "bdev_nvme_get_transport_statistics", 00:04:12.324 "bdev_nvme_apply_firmware", 00:04:12.324 "bdev_nvme_detach_controller", 00:04:12.324 "bdev_nvme_get_controllers", 00:04:12.324 "bdev_nvme_attach_controller", 00:04:12.324 "bdev_nvme_set_hotplug", 00:04:12.324 "bdev_nvme_set_options", 00:04:12.324 "bdev_passthru_delete", 00:04:12.324 "bdev_passthru_create", 00:04:12.324 "bdev_lvol_grow_lvstore", 00:04:12.324 "bdev_lvol_get_lvols", 00:04:12.324 "bdev_lvol_get_lvstores", 00:04:12.324 "bdev_lvol_delete", 00:04:12.324 "bdev_lvol_set_read_only", 00:04:12.324 "bdev_lvol_resize", 00:04:12.324 "bdev_lvol_decouple_parent", 00:04:12.324 "bdev_lvol_inflate", 
00:04:12.324 "bdev_lvol_rename", 00:04:12.324 "bdev_lvol_clone_bdev", 00:04:12.324 "bdev_lvol_clone", 00:04:12.324 "bdev_lvol_snapshot", 00:04:12.324 "bdev_lvol_create", 00:04:12.324 "bdev_lvol_delete_lvstore", 00:04:12.324 "bdev_lvol_rename_lvstore", 00:04:12.324 "bdev_lvol_create_lvstore", 00:04:12.324 "bdev_raid_set_options", 00:04:12.324 "bdev_raid_remove_base_bdev", 00:04:12.324 "bdev_raid_add_base_bdev", 00:04:12.324 "bdev_raid_delete", 00:04:12.324 "bdev_raid_create", 00:04:12.324 "bdev_raid_get_bdevs", 00:04:12.324 "bdev_error_inject_error", 00:04:12.324 "bdev_error_delete", 00:04:12.324 "bdev_error_create", 00:04:12.324 "bdev_split_delete", 00:04:12.324 "bdev_split_create", 00:04:12.324 "bdev_delay_delete", 00:04:12.324 "bdev_delay_create", 00:04:12.324 "bdev_delay_update_latency", 00:04:12.324 "bdev_zone_block_delete", 00:04:12.324 "bdev_zone_block_create", 00:04:12.324 "blobfs_create", 00:04:12.324 "blobfs_detect", 00:04:12.324 "blobfs_set_cache_size", 00:04:12.324 "bdev_aio_delete", 00:04:12.324 "bdev_aio_rescan", 00:04:12.324 "bdev_aio_create", 00:04:12.324 "bdev_ftl_set_property", 00:04:12.324 "bdev_ftl_get_properties", 00:04:12.324 "bdev_ftl_get_stats", 00:04:12.324 "bdev_ftl_unmap", 00:04:12.324 "bdev_ftl_unload", 00:04:12.324 "bdev_ftl_delete", 00:04:12.324 "bdev_ftl_load", 00:04:12.324 "bdev_ftl_create", 00:04:12.324 "bdev_virtio_attach_controller", 00:04:12.324 "bdev_virtio_scsi_get_devices", 00:04:12.324 "bdev_virtio_detach_controller", 00:04:12.324 "bdev_virtio_blk_set_hotplug", 00:04:12.324 "bdev_iscsi_delete", 00:04:12.324 "bdev_iscsi_create", 00:04:12.324 "bdev_iscsi_set_options", 00:04:12.324 "accel_error_inject_error", 00:04:12.324 "ioat_scan_accel_module", 00:04:12.324 "dsa_scan_accel_module", 00:04:12.324 "iaa_scan_accel_module", 00:04:12.324 "vfu_virtio_create_scsi_endpoint", 00:04:12.324 "vfu_virtio_scsi_remove_target", 00:04:12.324 "vfu_virtio_scsi_add_target", 00:04:12.324 "vfu_virtio_create_blk_endpoint", 00:04:12.324 
"vfu_virtio_delete_endpoint", 00:04:12.324 "keyring_file_remove_key", 00:04:12.324 "keyring_file_add_key", 00:04:12.324 "iscsi_set_options", 00:04:12.324 "iscsi_get_auth_groups", 00:04:12.324 "iscsi_auth_group_remove_secret", 00:04:12.324 "iscsi_auth_group_add_secret", 00:04:12.324 "iscsi_delete_auth_group", 00:04:12.324 "iscsi_create_auth_group", 00:04:12.324 "iscsi_set_discovery_auth", 00:04:12.324 "iscsi_get_options", 00:04:12.324 "iscsi_target_node_request_logout", 00:04:12.324 "iscsi_target_node_set_redirect", 00:04:12.324 "iscsi_target_node_set_auth", 00:04:12.324 "iscsi_target_node_add_lun", 00:04:12.324 "iscsi_get_stats", 00:04:12.324 "iscsi_get_connections", 00:04:12.324 "iscsi_portal_group_set_auth", 00:04:12.324 "iscsi_start_portal_group", 00:04:12.324 "iscsi_delete_portal_group", 00:04:12.324 "iscsi_create_portal_group", 00:04:12.324 "iscsi_get_portal_groups", 00:04:12.324 "iscsi_delete_target_node", 00:04:12.324 "iscsi_target_node_remove_pg_ig_maps", 00:04:12.324 "iscsi_target_node_add_pg_ig_maps", 00:04:12.324 "iscsi_create_target_node", 00:04:12.324 "iscsi_get_target_nodes", 00:04:12.324 "iscsi_delete_initiator_group", 00:04:12.324 "iscsi_initiator_group_remove_initiators", 00:04:12.324 "iscsi_initiator_group_add_initiators", 00:04:12.324 "iscsi_create_initiator_group", 00:04:12.324 "iscsi_get_initiator_groups", 00:04:12.324 "nvmf_set_crdt", 00:04:12.324 "nvmf_set_config", 00:04:12.324 "nvmf_set_max_subsystems", 00:04:12.324 "nvmf_subsystem_get_listeners", 00:04:12.324 "nvmf_subsystem_get_qpairs", 00:04:12.324 "nvmf_subsystem_get_controllers", 00:04:12.324 "nvmf_get_stats", 00:04:12.324 "nvmf_get_transports", 00:04:12.324 "nvmf_create_transport", 00:04:12.324 "nvmf_get_targets", 00:04:12.324 "nvmf_delete_target", 00:04:12.324 "nvmf_create_target", 00:04:12.324 "nvmf_subsystem_allow_any_host", 00:04:12.324 "nvmf_subsystem_remove_host", 00:04:12.324 "nvmf_subsystem_add_host", 00:04:12.324 "nvmf_ns_remove_host", 00:04:12.324 "nvmf_ns_add_host", 
00:04:12.324 "nvmf_subsystem_remove_ns", 00:04:12.324 "nvmf_subsystem_add_ns", 00:04:12.324 "nvmf_subsystem_listener_set_ana_state", 00:04:12.324 "nvmf_discovery_get_referrals", 00:04:12.324 "nvmf_discovery_remove_referral", 00:04:12.324 "nvmf_discovery_add_referral", 00:04:12.324 "nvmf_subsystem_remove_listener", 00:04:12.324 "nvmf_subsystem_add_listener", 00:04:12.324 "nvmf_delete_subsystem", 00:04:12.324 "nvmf_create_subsystem", 00:04:12.324 "nvmf_get_subsystems", 00:04:12.324 "env_dpdk_get_mem_stats", 00:04:12.324 "nbd_get_disks", 00:04:12.324 "nbd_stop_disk", 00:04:12.324 "nbd_start_disk", 00:04:12.324 "ublk_recover_disk", 00:04:12.324 "ublk_get_disks", 00:04:12.324 "ublk_stop_disk", 00:04:12.324 "ublk_start_disk", 00:04:12.324 "ublk_destroy_target", 00:04:12.324 "ublk_create_target", 00:04:12.324 "virtio_blk_create_transport", 00:04:12.324 "virtio_blk_get_transports", 00:04:12.324 "vhost_controller_set_coalescing", 00:04:12.324 "vhost_get_controllers", 00:04:12.324 "vhost_delete_controller", 00:04:12.324 "vhost_create_blk_controller", 00:04:12.324 "vhost_scsi_controller_remove_target", 00:04:12.324 "vhost_scsi_controller_add_target", 00:04:12.324 "vhost_start_scsi_controller", 00:04:12.324 "vhost_create_scsi_controller", 00:04:12.324 "thread_set_cpumask", 00:04:12.324 "framework_get_scheduler", 00:04:12.324 "framework_set_scheduler", 00:04:12.324 "framework_get_reactors", 00:04:12.324 "thread_get_io_channels", 00:04:12.324 "thread_get_pollers", 00:04:12.324 "thread_get_stats", 00:04:12.324 "framework_monitor_context_switch", 00:04:12.324 "spdk_kill_instance", 00:04:12.324 "log_enable_timestamps", 00:04:12.324 "log_get_flags", 00:04:12.324 "log_clear_flag", 00:04:12.324 "log_set_flag", 00:04:12.324 "log_get_level", 00:04:12.324 "log_set_level", 00:04:12.324 "log_get_print_level", 00:04:12.324 "log_set_print_level", 00:04:12.324 "framework_enable_cpumask_locks", 00:04:12.324 "framework_disable_cpumask_locks", 00:04:12.324 "framework_wait_init", 00:04:12.324 
"framework_start_init", 00:04:12.324 "scsi_get_devices", 00:04:12.324 "bdev_get_histogram", 00:04:12.324 "bdev_enable_histogram", 00:04:12.324 "bdev_set_qos_limit", 00:04:12.324 "bdev_set_qd_sampling_period", 00:04:12.324 "bdev_get_bdevs", 00:04:12.324 "bdev_reset_iostat", 00:04:12.324 "bdev_get_iostat", 00:04:12.324 "bdev_examine", 00:04:12.324 "bdev_wait_for_examine", 00:04:12.324 "bdev_set_options", 00:04:12.324 "notify_get_notifications", 00:04:12.324 "notify_get_types", 00:04:12.324 "accel_get_stats", 00:04:12.324 "accel_set_options", 00:04:12.324 "accel_set_driver", 00:04:12.324 "accel_crypto_key_destroy", 00:04:12.324 "accel_crypto_keys_get", 00:04:12.324 "accel_crypto_key_create", 00:04:12.324 "accel_assign_opc", 00:04:12.324 "accel_get_module_info", 00:04:12.324 "accel_get_opc_assignments", 00:04:12.324 "vmd_rescan", 00:04:12.324 "vmd_remove_device", 00:04:12.324 "vmd_enable", 00:04:12.324 "sock_set_default_impl", 00:04:12.324 "sock_impl_set_options", 00:04:12.324 "sock_impl_get_options", 00:04:12.324 "iobuf_get_stats", 00:04:12.324 "iobuf_set_options", 00:04:12.324 "keyring_get_keys", 00:04:12.324 "framework_get_pci_devices", 00:04:12.324 "framework_get_config", 00:04:12.325 "framework_get_subsystems", 00:04:12.325 "vfu_tgt_set_base_path", 00:04:12.325 "trace_get_info", 00:04:12.325 "trace_get_tpoint_group_mask", 00:04:12.325 "trace_disable_tpoint_group", 00:04:12.325 "trace_enable_tpoint_group", 00:04:12.325 "trace_clear_tpoint_mask", 00:04:12.325 "trace_set_tpoint_mask", 00:04:12.325 "spdk_get_version", 00:04:12.325 "rpc_get_methods" 00:04:12.325 ] 00:04:12.325 13:33:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:12.325 13:33:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.325 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:12.584 13:33:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:12.584 13:33:15 -- spdkcli/tcp.sh@38 -- # killprocess 2487875 00:04:12.584 13:33:15 -- 
common/autotest_common.sh@936 -- # '[' -z 2487875 ']' 00:04:12.584 13:33:15 -- common/autotest_common.sh@940 -- # kill -0 2487875 00:04:12.584 13:33:15 -- common/autotest_common.sh@941 -- # uname 00:04:12.584 13:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.584 13:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2487875 00:04:12.584 13:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.584 13:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.584 13:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2487875' 00:04:12.584 killing process with pid 2487875 00:04:12.584 13:33:15 -- common/autotest_common.sh@955 -- # kill 2487875 00:04:12.584 13:33:15 -- common/autotest_common.sh@960 -- # wait 2487875 00:04:12.843 00:04:12.843 real 0m1.761s 00:04:12.843 user 0m3.330s 00:04:12.843 sys 0m0.487s 00:04:12.843 13:33:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.843 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:12.843 ************************************ 00:04:12.843 END TEST spdkcli_tcp 00:04:12.843 ************************************ 00:04:13.102 13:33:15 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:13.102 13:33:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.102 13:33:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.102 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:13.102 ************************************ 00:04:13.102 START TEST dpdk_mem_utility 00:04:13.102 ************************************ 00:04:13.102 13:33:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:13.102 * Looking for test storage... 
00:04:13.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:13.102 13:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:13.102 13:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2488210 00:04:13.102 13:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.102 13:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2488210 00:04:13.102 13:33:15 -- common/autotest_common.sh@817 -- # '[' -z 2488210 ']' 00:04:13.102 13:33:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.102 13:33:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:13.102 13:33:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.102 13:33:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:13.102 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:13.102 [2024-04-18 13:33:15.853335] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:13.102 [2024-04-18 13:33:15.853440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488210 ] 00:04:13.102 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.362 [2024-04-18 13:33:15.914256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.362 [2024-04-18 13:33:16.019175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.623 13:33:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:13.623 13:33:16 -- common/autotest_common.sh@850 -- # return 0 00:04:13.623 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:13.623 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:13.623 13:33:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.623 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.623 { 00:04:13.623 "filename": "/tmp/spdk_mem_dump.txt" 00:04:13.623 } 00:04:13.623 13:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.623 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:13.623 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:13.623 1 heaps totaling size 814.000000 MiB 00:04:13.623 size: 814.000000 MiB heap id: 0 00:04:13.623 end heaps---------- 00:04:13.623 8 mempools totaling size 598.116089 MiB 00:04:13.623 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:13.623 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:13.623 size: 84.521057 MiB name: bdev_io_2488210 00:04:13.623 size: 51.011292 MiB name: evtpool_2488210 00:04:13.624 size: 50.003479 MiB name: msgpool_2488210 00:04:13.624 size: 21.763794 MiB name: PDU_Pool 00:04:13.624 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:04:13.624 size: 0.026123 MiB name: Session_Pool 00:04:13.624 end mempools------- 00:04:13.624 6 memzones totaling size 4.142822 MiB 00:04:13.624 size: 1.000366 MiB name: RG_ring_0_2488210 00:04:13.624 size: 1.000366 MiB name: RG_ring_1_2488210 00:04:13.624 size: 1.000366 MiB name: RG_ring_4_2488210 00:04:13.624 size: 1.000366 MiB name: RG_ring_5_2488210 00:04:13.624 size: 0.125366 MiB name: RG_ring_2_2488210 00:04:13.624 size: 0.015991 MiB name: RG_ring_3_2488210 00:04:13.624 end memzones------- 00:04:13.624 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:13.624 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:13.624 list of free elements. size: 12.519348 MiB 00:04:13.624 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:13.624 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:13.624 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:13.624 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:13.624 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:13.624 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:13.624 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:13.624 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:13.624 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:13.624 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:13.624 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:13.624 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:13.624 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:13.624 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:13.624 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:13.624 list of standard malloc elements. 
size: 199.218079 MiB 00:04:13.624 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:13.624 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:13.624 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:13.624 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:13.624 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:13.624 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:13.624 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:13.624 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:13.624 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:13.624 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:13.624 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:13.624 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:13.624 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:13.624 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:13.624 list of memzone associated elements. 
size: 602.262573 MiB 00:04:13.624 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:13.624 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:13.624 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:13.624 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:13.624 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:13.624 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2488210_0 00:04:13.624 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:13.624 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2488210_0 00:04:13.624 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:13.624 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2488210_0 00:04:13.624 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:13.624 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:13.624 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:13.624 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:13.624 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:13.624 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2488210 00:04:13.624 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:13.624 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2488210 00:04:13.624 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:13.624 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2488210 00:04:13.624 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:13.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:13.624 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:13.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:13.624 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:13.624 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:13.624 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:13.624 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:13.624 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:13.625 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2488210
00:04:13.625 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:13.625 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2488210
00:04:13.625 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:13.625 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2488210
00:04:13.625 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:13.625 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2488210
00:04:13.625 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:13.625 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2488210
00:04:13.625 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:13.625 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:13.625 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:13.625 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:13.625 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:13.625 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:13.625 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:13.625 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2488210
00:04:13.625 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:13.625 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:13.625 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:13.625 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:13.625 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:13.625 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2488210
00:04:13.625 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:13.625 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:13.625 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:13.625 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2488210
00:04:13.625 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:13.625 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2488210
00:04:13.625 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:13.625 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:13.625 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:13.625 13:33:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2488210
00:04:13.625 13:33:16 -- common/autotest_common.sh@936 -- # '[' -z 2488210 ']'
00:04:13.625 13:33:16 -- common/autotest_common.sh@940 -- # kill -0 2488210
00:04:13.625 13:33:16 -- common/autotest_common.sh@941 -- # uname
00:04:13.625 13:33:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:13.625 13:33:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2488210
00:04:13.885 13:33:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:13.885 13:33:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:13.885 13:33:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2488210'
killing process with pid 2488210
13:33:16 -- common/autotest_common.sh@955 -- # kill 2488210
13:33:16 -- common/autotest_common.sh@960 -- # wait 2488210
00:04:14.145
00:04:14.145 real 0m1.138s
00:04:14.145 user 0m1.093s
00:04:14.145 sys 0m0.411s
13:33:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:14.145 13:33:16 -- common/autotest_common.sh@10 -- # set +x
00:04:14.145
************************************
00:04:14.145 END TEST dpdk_mem_utility
00:04:14.145 ************************************
00:04:14.145 13:33:16 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:14.145 13:33:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:14.145 13:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:14.145 13:33:16 -- common/autotest_common.sh@10 -- # set +x
00:04:14.404 ************************************
00:04:14.404 START TEST event
00:04:14.404 ************************************
00:04:14.404 13:33:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:14.404 * Looking for test storage...
00:04:14.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:14.404 13:33:17 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:14.404 13:33:17 -- bdev/nbd_common.sh@6 -- # set -e
00:04:14.404 13:33:17 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:14.404 13:33:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:04:14.404 13:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:14.404 13:33:17 -- common/autotest_common.sh@10 -- # set +x
00:04:14.404 ************************************
00:04:14.404 START TEST event_perf
00:04:14.404 ************************************
00:04:14.404 13:33:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:14.404 Running I/O for 1 seconds...[2024-04-18 13:33:17.176240] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:14.404 [2024-04-18 13:33:17.176302] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488422 ]
00:04:14.404 EAL: No free 2048 kB hugepages reported on node 1
00:04:14.664 [2024-04-18 13:33:17.245536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:14.664 [2024-04-18 13:33:17.367389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:14.664 [2024-04-18 13:33:17.367444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:14.664 [2024-04-18 13:33:17.367518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:14.664 [2024-04-18 13:33:17.367520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:16.041 Running I/O for 1 seconds...
00:04:16.041 lcore 0: 237821
00:04:16.041 lcore 1: 237821
00:04:16.041 lcore 2: 237822
00:04:16.041 lcore 3: 237821
00:04:16.041 done.
00:04:16.041
00:04:16.041 real 0m1.331s
00:04:16.041 user 0m4.231s
00:04:16.041 sys 0m0.092s
13:33:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:16.042 13:33:18 -- common/autotest_common.sh@10 -- # set +x
00:04:16.042 ************************************
00:04:16.042 END TEST event_perf
00:04:16.042 ************************************
00:04:16.042 13:33:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:16.042 13:33:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:16.042 13:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:16.042 13:33:18 -- common/autotest_common.sh@10 -- # set +x
00:04:16.042 ************************************
00:04:16.042 START TEST event_reactor
00:04:16.042 ************************************
00:04:16.042 13:33:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:16.042 [2024-04-18 13:33:18.633242] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:16.042 [2024-04-18 13:33:18.633307] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488583 ]
00:04:16.042 EAL: No free 2048 kB hugepages reported on node 1
00:04:16.042 [2024-04-18 13:33:18.698141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:16.042 [2024-04-18 13:33:18.811963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:17.422 test_start
00:04:17.422 oneshot
00:04:17.422 tick 100
00:04:17.422 tick 100
00:04:17.422 tick 250
00:04:17.422 tick 100
00:04:17.422 tick 100
00:04:17.422 tick 100
00:04:17.422 tick 250
00:04:17.422 tick 500
00:04:17.422 tick 100
00:04:17.422 tick 100
00:04:17.422 tick 250
00:04:17.422 tick 100
00:04:17.422 tick 100
00:04:17.422 test_end
00:04:17.422
00:04:17.422 real 0m1.316s
00:04:17.422 user 0m1.226s
00:04:17.422 sys 0m0.085s
13:33:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:17.422 13:33:19 -- common/autotest_common.sh@10 -- # set +x
00:04:17.422 ************************************
00:04:17.422 END TEST event_reactor
00:04:17.422 ************************************
00:04:17.422 13:33:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:17.422 13:33:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:04:17.422 13:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:17.422 13:33:19 -- common/autotest_common.sh@10 -- # set +x
00:04:17.422 ************************************
00:04:17.422 START TEST event_reactor_perf
00:04:17.422 ************************************
00:04:17.422 13:33:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:17.422 [2024-04-18 13:33:20.070487] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:17.422 [2024-04-18 13:33:20.070548] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488870 ]
00:04:17.422 EAL: No free 2048 kB hugepages reported on node 1
00:04:17.422 [2024-04-18 13:33:20.135032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.682 [2024-04-18 13:33:20.255151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.620 test_start
00:04:18.620 test_end
00:04:18.620 Performance: 358786 events per second
00:04:18.620
00:04:18.620 real 0m1.321s
00:04:18.620 user 0m1.232s
00:04:18.620 sys 0m0.083s
13:33:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:18.620 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:18.620 ************************************
00:04:18.620 END TEST event_reactor_perf
00:04:18.620 ************************************
00:04:18.620 13:33:21 -- event/event.sh@49 -- # uname -s
00:04:18.620 13:33:21 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:18.620 13:33:21 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:18.620 13:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:18.620 13:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:18.620 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:18.877 ************************************
00:04:18.877 START TEST event_scheduler
00:04:18.877 ************************************
00:04:18.877 13:33:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:18.877 * Looking for test storage...
00:04:18.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:18.877 13:33:21 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:18.877 13:33:21 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2489060
00:04:18.877 13:33:21 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:18.877 13:33:21 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:18.877 13:33:21 -- scheduler/scheduler.sh@37 -- # waitforlisten 2489060
00:04:18.877 13:33:21 -- common/autotest_common.sh@817 -- # '[' -z 2489060 ']'
00:04:18.877 13:33:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:18.877 13:33:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:18.877 13:33:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:33:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:18.877 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:18.877 [2024-04-18 13:33:21.597494] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:18.877 [2024-04-18 13:33:21.597566] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489060 ]
00:04:18.877 EAL: No free 2048 kB hugepages reported on node 1
00:04:18.877 [2024-04-18 13:33:21.654445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:19.135 [2024-04-18 13:33:21.761357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:19.135 [2024-04-18 13:33:21.761413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:19.135 [2024-04-18 13:33:21.761479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:04:19.135 [2024-04-18 13:33:21.761482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:04:19.135 13:33:21 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:19.135 13:33:21 -- common/autotest_common.sh@850 -- # return 0
00:04:19.135 13:33:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:19.135 13:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.135 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:19.135 POWER: Env isn't set yet!
00:04:19.135 POWER: Attempting to initialise ACPI cpufreq power management...
00:04:19.135 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies
00:04:19.135 POWER: Cannot get available frequencies of lcore 0
00:04:19.135 POWER: Attempting to initialise PSTAT power management...
00:04:19.135 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:04:19.135 POWER: Initialized successfully for lcore 0 power management
00:04:19.135 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:04:19.135 POWER: Initialized successfully for lcore 1 power management
00:04:19.135 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:04:19.135 POWER: Initialized successfully for lcore 2 power management
00:04:19.135 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:04:19.135 POWER: Initialized successfully for lcore 3 power management
00:04:19.135 13:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.135 13:33:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:19.135 13:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.135 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:19.135 [2024-04-18 13:33:21.939870] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:19.135 13:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.135 13:33:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:19.135 13:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.135 13:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.135 13:33:21 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 ************************************
00:04:19.395 START TEST scheduler_create_thread
00:04:19.395 ************************************
00:04:19.395 13:33:22 -- common/autotest_common.sh@1111 -- # scheduler_create_thread
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 2
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 3
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 4
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 5
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 6
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 7
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 8
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 9
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 10
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:19.395 13:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:19.395 13:33:22 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:19.395 13:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:19.395 13:33:22 -- common/autotest_common.sh@10 -- # set +x
00:04:21.304 13:33:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:21.304 13:33:23 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:21.304 13:33:23 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:21.304 13:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:21.304 13:33:23 -- common/autotest_common.sh@10 -- # set +x
00:04:21.873 13:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:21.873
00:04:21.873 real 0m2.617s
00:04:21.873 user 0m0.011s
00:04:21.873 sys 0m0.003s
13:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:21.873 13:33:24 -- common/autotest_common.sh@10 -- # set +x
00:04:21.873 ************************************
00:04:21.873 END TEST scheduler_create_thread
00:04:21.873 ************************************
00:04:22.133 13:33:24 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:22.133 13:33:24 -- scheduler/scheduler.sh@46 -- # killprocess 2489060
00:04:22.133 13:33:24 -- common/autotest_common.sh@936 -- # '[' -z 2489060 ']'
00:04:22.133 13:33:24 -- common/autotest_common.sh@940 -- # kill -0 2489060
00:04:22.133 13:33:24 -- common/autotest_common.sh@941 -- # uname
00:04:22.133 13:33:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:22.133 13:33:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489060
00:04:22.133 13:33:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:04:22.133 13:33:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:04:22.133 13:33:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489060'
killing process with pid 2489060
13:33:24 -- common/autotest_common.sh@955 -- # kill 2489060
13:33:24 -- common/autotest_common.sh@960 -- # wait 2489060
00:04:22.393 [2024-04-18 13:33:25.139196] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:22.652 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully
00:04:22.652 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:04:22.652 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully
00:04:22.652 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:04:22.652 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully
00:04:22.652 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:04:22.652 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully
00:04:22.652 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:04:22.652
00:04:22.652 real 0m3.917s
00:04:22.652 user 0m5.901s
00:04:22.652 sys 0m0.388s
13:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:22.652 13:33:25 -- common/autotest_common.sh@10 -- # set +x
00:04:22.652 ************************************
00:04:22.652 END TEST event_scheduler
00:04:22.652 ************************************
00:04:22.652 13:33:25 -- event/event.sh@51 -- # modprobe -n nbd
00:04:22.652 13:33:25 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:22.652 13:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:22.652 13:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:22.652 13:33:25 -- common/autotest_common.sh@10 -- # set +x
00:04:22.912 ************************************
00:04:22.912 START TEST app_repeat
00:04:22.912 ************************************
00:04:22.912 13:33:25 -- common/autotest_common.sh@1111 -- # app_repeat_test
00:04:22.912 13:33:25 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:22.912 13:33:25 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
13:33:25 -- event/event.sh@13 -- # local nbd_list
00:04:22.912 13:33:25 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:22.912 13:33:25 -- event/event.sh@14 -- # local bdev_list
00:04:22.912 13:33:25 -- event/event.sh@15 -- # local repeat_times=4
00:04:22.912 13:33:25 -- event/event.sh@17 -- # modprobe nbd
00:04:22.912 13:33:25 -- event/event.sh@19 -- # repeat_pid=2489546
00:04:22.912 13:33:25 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:22.912 13:33:25 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:22.912 13:33:25 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2489546'
Process app_repeat pid: 2489546
13:33:25 -- event/event.sh@23 -- # for i in {0..2}
00:04:22.912 13:33:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
13:33:25 -- event/event.sh@25 -- # waitforlisten 2489546 /var/tmp/spdk-nbd.sock
00:04:22.912 13:33:25 -- common/autotest_common.sh@817 -- # '[' -z 2489546 ']'
00:04:22.912 13:33:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:22.912 13:33:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:22.912 13:33:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:33:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:22.912 13:33:25 -- common/autotest_common.sh@10 -- # set +x
00:04:22.912 [2024-04-18 13:33:25.567051] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:22.912 [2024-04-18 13:33:25.567122] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489546 ]
00:04:22.912 EAL: No free 2048 kB hugepages reported on node 1
00:04:22.912 [2024-04-18 13:33:25.626793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:23.171 [2024-04-18 13:33:25.736935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:23.171 [2024-04-18 13:33:25.736940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.171 13:33:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:23.171 13:33:25 -- common/autotest_common.sh@850 -- # return 0
00:04:23.171 13:33:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:23.429 Malloc0
00:04:23.429 13:33:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:23.686 Malloc1
00:04:23.686 13:33:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@12 -- # local i
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.686 13:33:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:23.944 /dev/nbd0
00:04:23.944 13:33:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:23.944 13:33:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:23.944 13:33:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:04:23.944 13:33:26 -- common/autotest_common.sh@855 -- # local i
00:04:23.944 13:33:26 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:23.944 13:33:26 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:23.944 13:33:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:04:23.944 13:33:26 -- common/autotest_common.sh@859 -- # break
00:04:23.944 13:33:26 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:23.945 13:33:26 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:23.945 13:33:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:23.945 1+0 records in
00:04:23.945 1+0 records out
00:04:23.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160327 s, 25.5 MB/s
00:04:23.945 13:33:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.945 13:33:26 -- common/autotest_common.sh@872 -- # size=4096
00:04:23.945 13:33:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.945 13:33:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:23.945 13:33:26 -- common/autotest_common.sh@875 -- # return 0
00:04:23.945 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:23.945 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.945 13:33:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:24.202 /dev/nbd1
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:24.202 13:33:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:04:24.202 13:33:26 -- common/autotest_common.sh@855 -- # local i
00:04:24.202 13:33:26 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:24.202 13:33:26 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:24.202 13:33:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:04:24.202 13:33:26 -- common/autotest_common.sh@859 -- # break
00:04:24.202 13:33:26 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:24.202 13:33:26 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:24.202 13:33:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:24.202 1+0 records in
00:04:24.202 1+0 records out
00:04:24.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191457 s, 21.4 MB/s
00:04:24.202 13:33:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.202 13:33:26 -- common/autotest_common.sh@872 -- # size=4096
00:04:24.202 13:33:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:24.202 13:33:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:24.202 13:33:26 -- common/autotest_common.sh@875 -- # return 0
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.202 13:33:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:24.460 {
00:04:24.460 "nbd_device": "/dev/nbd0",
00:04:24.460 "bdev_name": "Malloc0"
00:04:24.460 },
00:04:24.460 {
00:04:24.460 "nbd_device": "/dev/nbd1",
00:04:24.460 "bdev_name": "Malloc1"
00:04:24.460 }
00:04:24.460 ]'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@64 -- # echo '[
00:04:24.460 {
00:04:24.460 "nbd_device": "/dev/nbd0",
00:04:24.460 "bdev_name": "Malloc0"
00:04:24.460 },
00:04:24.460 {
00:04:24.460 "nbd_device": "/dev/nbd1",
00:04:24.460 "bdev_name": "Malloc1"
00:04:24.460 }
00:04:24.460 ]'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:24.460 /dev/nbd1'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:24.460 /dev/nbd1'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@65 -- # count=2
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@66 -- # echo 2
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@95 -- # count=2
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.460 13:33:27 -- bdev/nbd_common.sh@70 -- # local nbd_list
13:33:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.460 256+0 records in 00:04:24.460 256+0 records out 00:04:24.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461778 s, 227 MB/s 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.460 256+0 records in 00:04:24.460 256+0 records out 00:04:24.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242234 s, 43.3 MB/s 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.460 256+0 records in 00:04:24.460 256+0 records out 00:04:24.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285172 s, 36.8 MB/s 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.460 13:33:27 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.460 13:33:27 -- bdev/nbd_common.sh@51 -- # local i 00:04:24.461 13:33:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.461 13:33:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@41 -- # break 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.718 13:33:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.719 13:33:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@41 -- # break 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.976 13:33:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.234 13:33:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.234 13:33:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.234 13:33:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@65 -- # true 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.234 13:33:28 -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.234 13:33:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.494 13:33:28 -- event/event.sh@35 -- # sleep 3 00:04:25.797 [2024-04-18 13:33:28.572356] app.c: 828:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:04:26.057 [2024-04-18 13:33:28.684749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.057 [2024-04-18 13:33:28.684754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.057 [2024-04-18 13:33:28.748225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.057 [2024-04-18 13:33:28.748301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.593 13:33:31 -- event/event.sh@23 -- # for i in {0..2} 00:04:28.593 13:33:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.593 spdk_app_start Round 1 00:04:28.593 13:33:31 -- event/event.sh@25 -- # waitforlisten 2489546 /var/tmp/spdk-nbd.sock 00:04:28.593 13:33:31 -- common/autotest_common.sh@817 -- # '[' -z 2489546 ']' 00:04:28.593 13:33:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.593 13:33:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:28.593 13:33:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:28.593 13:33:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:28.593 13:33:31 -- common/autotest_common.sh@10 -- # set +x 00:04:28.851 13:33:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.851 13:33:31 -- common/autotest_common.sh@850 -- # return 0 00:04:28.851 13:33:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.109 Malloc0 00:04:29.109 13:33:31 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.367 Malloc1 00:04:29.367 13:33:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@12 -- # local i 00:04:29.367 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.368 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.368 13:33:32 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.626 /dev/nbd0 00:04:29.626 13:33:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.626 13:33:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.626 13:33:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:29.626 13:33:32 -- common/autotest_common.sh@855 -- # local i 00:04:29.626 13:33:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:29.626 13:33:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:29.626 13:33:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:29.626 13:33:32 -- common/autotest_common.sh@859 -- # break 00:04:29.626 13:33:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:29.626 13:33:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:29.626 13:33:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.626 1+0 records in 00:04:29.626 1+0 records out 00:04:29.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021551 s, 19.0 MB/s 00:04:29.626 13:33:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.626 13:33:32 -- common/autotest_common.sh@872 -- # size=4096 00:04:29.626 13:33:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.626 13:33:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:29.626 13:33:32 -- common/autotest_common.sh@875 -- # return 0 00:04:29.626 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.626 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.626 13:33:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:04:29.885 /dev/nbd1 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.885 13:33:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:29.885 13:33:32 -- common/autotest_common.sh@855 -- # local i 00:04:29.885 13:33:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:29.885 13:33:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:29.885 13:33:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:29.885 13:33:32 -- common/autotest_common.sh@859 -- # break 00:04:29.885 13:33:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:29.885 13:33:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:29.885 13:33:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.885 1+0 records in 00:04:29.885 1+0 records out 00:04:29.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015543 s, 26.4 MB/s 00:04:29.885 13:33:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.885 13:33:32 -- common/autotest_common.sh@872 -- # size=4096 00:04:29.885 13:33:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.885 13:33:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:29.885 13:33:32 -- common/autotest_common.sh@875 -- # return 0 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.885 13:33:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.143 { 00:04:30.143 "nbd_device": "/dev/nbd0", 00:04:30.143 "bdev_name": "Malloc0" 00:04:30.143 }, 00:04:30.143 { 00:04:30.143 "nbd_device": "/dev/nbd1", 00:04:30.143 "bdev_name": "Malloc1" 00:04:30.143 } 00:04:30.143 ]' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.143 { 00:04:30.143 "nbd_device": "/dev/nbd0", 00:04:30.143 "bdev_name": "Malloc0" 00:04:30.143 }, 00:04:30.143 { 00:04:30.143 "nbd_device": "/dev/nbd1", 00:04:30.143 "bdev_name": "Malloc1" 00:04:30.143 } 00:04:30.143 ]' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.143 /dev/nbd1' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.143 /dev/nbd1' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.143 13:33:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.144 256+0 records in 00:04:30.144 256+0 records out 00:04:30.144 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412211 s, 254 MB/s 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.144 256+0 records in 00:04:30.144 256+0 records out 00:04:30.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278381 s, 37.7 MB/s 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.144 256+0 records in 00:04:30.144 256+0 records out 00:04:30.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236195 s, 44.4 MB/s 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@51 -- # local i 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.144 13:33:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@41 -- # break 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.402 13:33:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.660 13:33:33 -- 
bdev/nbd_common.sh@41 -- # break 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.660 13:33:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@65 -- # true 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.918 13:33:33 -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.918 13:33:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.176 13:33:33 -- event/event.sh@35 -- # sleep 3 00:04:31.744 [2024-04-18 13:33:34.250776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.744 [2024-04-18 13:33:34.365066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.744 [2024-04-18 13:33:34.365072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.744 [2024-04-18 13:33:34.429655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:04:31.744 [2024-04-18 13:33:34.429734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.290 13:33:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:34.290 13:33:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:34.290 spdk_app_start Round 2 00:04:34.290 13:33:36 -- event/event.sh@25 -- # waitforlisten 2489546 /var/tmp/spdk-nbd.sock 00:04:34.290 13:33:36 -- common/autotest_common.sh@817 -- # '[' -z 2489546 ']' 00:04:34.290 13:33:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.290 13:33:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.290 13:33:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.290 13:33:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.290 13:33:36 -- common/autotest_common.sh@10 -- # set +x 00:04:34.549 13:33:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:34.549 13:33:37 -- common/autotest_common.sh@850 -- # return 0 00:04:34.549 13:33:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.807 Malloc0 00:04:34.807 13:33:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.065 Malloc1 00:04:35.065 13:33:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.065 13:33:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.065 13:33:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.065 13:33:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.065 13:33:37 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@12 -- # local i 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.066 13:33:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.324 /dev/nbd0 00:04:35.324 13:33:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.324 13:33:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.324 13:33:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:35.324 13:33:37 -- common/autotest_common.sh@855 -- # local i 00:04:35.324 13:33:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:35.324 13:33:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:35.324 13:33:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:35.324 13:33:37 -- common/autotest_common.sh@859 -- # break 00:04:35.324 13:33:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:35.324 13:33:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:35.324 13:33:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.324 1+0 records in 00:04:35.324 
1+0 records out 00:04:35.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143098 s, 28.6 MB/s 00:04:35.324 13:33:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.324 13:33:37 -- common/autotest_common.sh@872 -- # size=4096 00:04:35.324 13:33:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.324 13:33:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:35.324 13:33:37 -- common/autotest_common.sh@875 -- # return 0 00:04:35.324 13:33:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.324 13:33:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.324 13:33:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.582 /dev/nbd1 00:04:35.582 13:33:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.582 13:33:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.582 13:33:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:35.582 13:33:38 -- common/autotest_common.sh@855 -- # local i 00:04:35.582 13:33:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:35.582 13:33:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:35.582 13:33:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:35.582 13:33:38 -- common/autotest_common.sh@859 -- # break 00:04:35.582 13:33:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:35.582 13:33:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:35.582 13:33:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.582 1+0 records in 00:04:35.582 1+0 records out 00:04:35.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019895 s, 20.6 MB/s 00:04:35.582 13:33:38 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:35.582 13:33:38 -- common/autotest_common.sh@872 -- # size=4096
00:04:35.582 13:33:38 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:35.582 13:33:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:35.582 13:33:38 -- common/autotest_common.sh@875 -- # return 0
00:04:35.582 13:33:38 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:35.582 13:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:35.582 13:33:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:35.582 13:33:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.582 13:33:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:35.841 {
00:04:35.841 "nbd_device": "/dev/nbd0",
00:04:35.841 "bdev_name": "Malloc0"
00:04:35.841 },
00:04:35.841 {
00:04:35.841 "nbd_device": "/dev/nbd1",
00:04:35.841 "bdev_name": "Malloc1"
00:04:35.841 }
00:04:35.841 ]'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@64 -- # echo '[
00:04:35.841 {
00:04:35.841 "nbd_device": "/dev/nbd0",
00:04:35.841 "bdev_name": "Malloc0"
00:04:35.841 },
00:04:35.841 {
00:04:35.841 "nbd_device": "/dev/nbd1",
00:04:35.841 "bdev_name": "Malloc1"
00:04:35.841 }
00:04:35.841 ]'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:35.841 /dev/nbd1'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:35.841 /dev/nbd1'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@65 -- # count=2
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@66 -- # echo 2
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@95 -- # count=2
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:35.841 256+0 records in
00:04:35.841 256+0 records out
00:04:35.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051005 s, 206 MB/s
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:35.841 256+0 records in
00:04:35.841 256+0 records out
00:04:35.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224117 s, 46.8 MB/s
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:35.841 256+0 records in
00:04:35.841 256+0 records out
00:04:35.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274862 s, 38.1 MB/s
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@51 -- # local i
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:35.841 13:33:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@41 -- # break
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@45 -- # return 0
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:36.101 13:33:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@41 -- # break
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@45 -- # return 0
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.360 13:33:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@65 -- # echo ''
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@65 -- # true
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@65 -- # count=0
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@66 -- # echo 0
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@104 -- # count=0
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:36.618 13:33:39 -- bdev/nbd_common.sh@109 -- # return 0
00:04:36.618 13:33:39 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:36.878 13:33:39 -- event/event.sh@35 -- # sleep 3
00:04:37.138 [2024-04-18 13:33:39.920462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:37.396 [2024-04-18 13:33:40.036905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:37.396 [2024-04-18 13:33:40.036907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:37.396 [2024-04-18 13:33:40.102065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:37.397 [2024-04-18 13:33:40.102140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:39.932 13:33:42 -- event/event.sh@38 -- # waitforlisten 2489546 /var/tmp/spdk-nbd.sock
00:04:39.932 13:33:42 -- common/autotest_common.sh@817 -- # '[' -z 2489546 ']'
00:04:39.933 13:33:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:39.933 13:33:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:39.933 13:33:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:39.933 13:33:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:39.933 13:33:42 -- common/autotest_common.sh@10 -- # set +x
00:04:40.192 13:33:42 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:40.192 13:33:42 -- common/autotest_common.sh@850 -- # return 0
00:04:40.192 13:33:42 -- event/event.sh@39 -- # killprocess 2489546
00:04:40.192 13:33:42 -- common/autotest_common.sh@936 -- # '[' -z 2489546 ']'
00:04:40.192 13:33:42 -- common/autotest_common.sh@940 -- # kill -0 2489546
00:04:40.192 13:33:42 -- common/autotest_common.sh@941 -- # uname
00:04:40.192 13:33:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:40.192 13:33:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2489546
00:04:40.192 13:33:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:40.192 13:33:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:40.192 13:33:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2489546'
killing process with pid 2489546
13:33:42 -- common/autotest_common.sh@955 -- # kill 2489546
13:33:42 -- common/autotest_common.sh@960 -- # wait 2489546
00:04:40.451 spdk_app_start is called in Round 0.
00:04:40.451 Shutdown signal received, stop current app iteration
00:04:40.451 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization...
00:04:40.451 spdk_app_start is called in Round 1.
00:04:40.451 Shutdown signal received, stop current app iteration
00:04:40.451 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization...
00:04:40.451 spdk_app_start is called in Round 2.
00:04:40.451 Shutdown signal received, stop current app iteration
00:04:40.451 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 reinitialization...
00:04:40.451 spdk_app_start is called in Round 3.
00:04:40.451 Shutdown signal received, stop current app iteration
00:04:40.451 13:33:43 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:40.451 13:33:43 -- event/event.sh@42 -- # return 0
00:04:40.451
00:04:40.451 real 0m17.626s
00:04:40.451 user 0m38.385s
00:04:40.451 sys 0m3.274s
00:04:40.451 13:33:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:40.451 13:33:43 -- common/autotest_common.sh@10 -- # set +x
00:04:40.451 ************************************
00:04:40.451 END TEST app_repeat
00:04:40.451 ************************************
00:04:40.451 13:33:43 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:40.451 13:33:43 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:40.451 13:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:40.451 13:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:40.451 13:33:43 -- common/autotest_common.sh@10 -- # set +x
00:04:40.709 ************************************
00:04:40.709 START TEST cpu_locks
00:04:40.709 ************************************
00:04:40.709 13:33:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:40.709 * Looking for test storage...
00:04:40.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:40.709 13:33:43 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:40.709 13:33:43 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:40.709 13:33:43 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:40.709 13:33:43 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:40.709 13:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:40.709 13:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:40.709 13:33:43 -- common/autotest_common.sh@10 -- # set +x
00:04:40.709 ************************************
00:04:40.709 START TEST default_locks
00:04:40.709 ************************************
00:04:40.709 13:33:43 -- common/autotest_common.sh@1111 -- # default_locks
00:04:40.709 13:33:43 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2491894
00:04:40.709 13:33:43 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:40.709 13:33:43 -- event/cpu_locks.sh@47 -- # waitforlisten 2491894
00:04:40.709 13:33:43 -- common/autotest_common.sh@817 -- # '[' -z 2491894 ']'
00:04:40.709 13:33:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.709 13:33:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:40.709 13:33:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.709 13:33:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:40.709 13:33:43 -- common/autotest_common.sh@10 -- # set +x
00:04:40.709 [2024-04-18 13:33:43.505496] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:40.709 [2024-04-18 13:33:43.505605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491894 ]
00:04:40.968 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.968 [2024-04-18 13:33:43.568818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.943 [2024-04-18 13:33:43.685579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.943 13:33:44 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:41.943 13:33:44 -- common/autotest_common.sh@850 -- # return 0
00:04:41.943 13:33:44 -- event/cpu_locks.sh@49 -- # locks_exist 2491894
00:04:41.943 13:33:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2491894
00:04:41.943 13:33:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:42.202 lslocks: write error
00:04:42.202 13:33:44 -- event/cpu_locks.sh@50 -- # killprocess 2491894
00:04:42.202 13:33:44 -- common/autotest_common.sh@936 -- # '[' -z 2491894 ']'
00:04:42.202 13:33:44 -- common/autotest_common.sh@940 -- # kill -0 2491894
00:04:42.202 13:33:44 -- common/autotest_common.sh@941 -- # uname
00:04:42.202 13:33:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:42.202 13:33:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2491894
00:04:42.202 13:33:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:42.202 13:33:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:42.202 13:33:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2491894'
killing process with pid 2491894
13:33:44 -- common/autotest_common.sh@955 -- # kill 2491894
13:33:44 -- common/autotest_common.sh@960 -- # wait 2491894
00:04:42.461 13:33:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2491894
00:04:42.461 13:33:45 -- common/autotest_common.sh@638 -- # local es=0
00:04:42.461 13:33:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2491894
00:04:42.461 13:33:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:04:42.461 13:33:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:42.461 13:33:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:04:42.461 13:33:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:42.461 13:33:45 -- common/autotest_common.sh@641 -- # waitforlisten 2491894
00:04:42.461 13:33:45 -- common/autotest_common.sh@817 -- # '[' -z 2491894 ']'
00:04:42.461 13:33:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.461 13:33:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:42.461 13:33:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.461 13:33:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:42.461 13:33:45 -- common/autotest_common.sh@10 -- # set +x
00:04:42.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2491894) - No such process
00:04:42.461 ERROR: process (pid: 2491894) is no longer running
00:04:42.461 13:33:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:42.461 13:33:45 -- common/autotest_common.sh@850 -- # return 1
00:04:42.461 13:33:45 -- common/autotest_common.sh@641 -- # es=1
00:04:42.461 13:33:45 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:04:42.461 13:33:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:04:42.461 13:33:45 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:04:42.461 13:33:45 -- event/cpu_locks.sh@54 -- # no_locks
00:04:42.461 13:33:45 -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:42.719 13:33:45 -- event/cpu_locks.sh@26 -- # local lock_files
00:04:42.719 13:33:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:42.719
00:04:42.719 real 0m1.815s
00:04:42.719 user 0m1.929s
00:04:42.719 sys 0m0.583s
00:04:42.719 13:33:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:42.719 13:33:45 -- common/autotest_common.sh@10 -- # set +x
00:04:42.719 ************************************
00:04:42.719 END TEST default_locks
00:04:42.719 ************************************
00:04:42.719 13:33:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:42.719 13:33:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:42.719 13:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:42.719 13:33:45 -- common/autotest_common.sh@10 -- # set +x
00:04:42.719 ************************************
00:04:42.719 START TEST default_locks_via_rpc
00:04:42.719 ************************************
00:04:42.719 13:33:45 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc
00:04:42.719 13:33:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2492195
00:04:42.719 13:33:45 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:42.719 13:33:45 -- event/cpu_locks.sh@63 -- # waitforlisten 2492195
00:04:42.719 13:33:45 -- common/autotest_common.sh@817 -- # '[' -z 2492195 ']'
00:04:42.719 13:33:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.719 13:33:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:42.719 13:33:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.720 13:33:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:42.720 13:33:45 -- common/autotest_common.sh@10 -- # set +x
00:04:42.720 [2024-04-18 13:33:45.444139] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:42.720 [2024-04-18 13:33:45.444269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492195 ]
00:04:42.720 EAL: No free 2048 kB hugepages reported on node 1
00:04:42.720 [2024-04-18 13:33:45.505123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.977 [2024-04-18 13:33:45.620036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.915 13:33:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:43.916 13:33:46 -- common/autotest_common.sh@850 -- # return 0
00:04:43.916 13:33:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:43.916 13:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:43.916 13:33:46 -- common/autotest_common.sh@10 -- # set +x
00:04:43.916 13:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:43.916 13:33:46 -- event/cpu_locks.sh@67 -- # no_locks
00:04:43.916 13:33:46 -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:43.916 13:33:46 -- event/cpu_locks.sh@26 -- # local lock_files
00:04:43.916 13:33:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:43.916 13:33:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:43.916 13:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:43.916 13:33:46 -- common/autotest_common.sh@10 -- # set +x
00:04:43.916 13:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:43.916 13:33:46 -- event/cpu_locks.sh@71 -- # locks_exist 2492195
00:04:43.916 13:33:46 -- event/cpu_locks.sh@22 -- # lslocks -p 2492195
00:04:43.916 13:33:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:43.916 13:33:46 -- event/cpu_locks.sh@73 -- # killprocess 2492195
00:04:43.916 13:33:46 -- common/autotest_common.sh@936 -- # '[' -z 2492195 ']'
00:04:43.916 13:33:46 -- common/autotest_common.sh@940 -- # kill -0 2492195
00:04:43.916 13:33:46 -- common/autotest_common.sh@941 -- # uname
00:04:43.916 13:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:43.916 13:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492195
00:04:43.916 13:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:43.916 13:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:43.916 13:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492195'
killing process with pid 2492195
13:33:46 -- common/autotest_common.sh@955 -- # kill 2492195
13:33:46 -- common/autotest_common.sh@960 -- # wait 2492195
00:04:44.484
00:04:44.484 real 0m1.773s
00:04:44.484 user 0m1.897s
00:04:44.484 sys 0m0.556s
00:04:44.484 13:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:44.484 13:33:47 -- common/autotest_common.sh@10 -- # set +x
00:04:44.484 ************************************
00:04:44.484 END TEST default_locks_via_rpc
00:04:44.484 ************************************
00:04:44.484 13:33:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:44.484 13:33:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:44.484 13:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:44.484 13:33:47 -- common/autotest_common.sh@10 -- # set +x
00:04:44.484 ************************************
00:04:44.484 START TEST non_locking_app_on_locked_coremask
00:04:44.484 ************************************
00:04:44.484 13:33:47 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask
00:04:44.484 13:33:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2492496
00:04:44.484 13:33:47 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:44.484 13:33:47 -- event/cpu_locks.sh@81 -- # waitforlisten 2492496 /var/tmp/spdk.sock
00:04:44.484 13:33:47 -- common/autotest_common.sh@817 -- # '[' -z 2492496 ']'
00:04:44.484 13:33:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.484 13:33:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:44.484 13:33:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:44.484 13:33:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:44.484 13:33:47 -- common/autotest_common.sh@10 -- # set +x
00:04:44.743 [2024-04-18 13:33:47.335214] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:44.743 [2024-04-18 13:33:47.335311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492496 ]
00:04:44.743 EAL: No free 2048 kB hugepages reported on node 1
00:04:44.743 [2024-04-18 13:33:47.392150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.743 [2024-04-18 13:33:47.500648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:45.001 13:33:47 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:45.001 13:33:47 -- common/autotest_common.sh@850 -- # return 0
00:04:45.001 13:33:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2492501
00:04:45.001 13:33:47 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:45.001 13:33:47 -- event/cpu_locks.sh@85 -- # waitforlisten 2492501 /var/tmp/spdk2.sock
00:04:45.001 13:33:47 -- common/autotest_common.sh@817 -- # '[' -z 2492501 ']'
00:04:45.001 13:33:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:45.001 13:33:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:45.001 13:33:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:45.001 13:33:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:45.001 13:33:47 -- common/autotest_common.sh@10 -- # set +x
00:04:45.259 [2024-04-18 13:33:47.816840] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:45.259 [2024-04-18 13:33:47.816912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492501 ]
00:04:45.259 EAL: No free 2048 kB hugepages reported on node 1
00:04:45.259 [2024-04-18 13:33:47.910803] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:45.259 [2024-04-18 13:33:47.910836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:45.518 [2024-04-18 13:33:48.142432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.084 13:33:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:46.084 13:33:48 -- common/autotest_common.sh@850 -- # return 0
00:04:46.084 13:33:48 -- event/cpu_locks.sh@87 -- # locks_exist 2492496
00:04:46.084 13:33:48 -- event/cpu_locks.sh@22 -- # lslocks -p 2492496
00:04:46.084 13:33:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:46.342 lslocks: write error
00:04:46.342 13:33:49 -- event/cpu_locks.sh@89 -- # killprocess 2492496
00:04:46.342 13:33:49 -- common/autotest_common.sh@936 -- # '[' -z 2492496 ']'
00:04:46.342 13:33:49 -- common/autotest_common.sh@940 -- # kill -0 2492496
00:04:46.342 13:33:49 -- common/autotest_common.sh@941 -- # uname
00:04:46.342 13:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:46.342 13:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492496
00:04:46.600 13:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:46.600 13:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:46.600 13:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492496'
killing process with pid 2492496
13:33:49 -- common/autotest_common.sh@955 -- # kill 2492496
13:33:49 -- common/autotest_common.sh@960 -- # wait 2492496
00:04:47.538 13:33:50 -- event/cpu_locks.sh@90 -- # killprocess 2492501
00:04:47.538 13:33:50 -- common/autotest_common.sh@936 -- # '[' -z 2492501 ']'
00:04:47.538 13:33:50 -- common/autotest_common.sh@940 -- # kill -0 2492501
00:04:47.538 13:33:50 -- common/autotest_common.sh@941 -- # uname
00:04:47.538 13:33:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:47.538 13:33:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492501
00:04:47.538 13:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:47.538 13:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:47.538 13:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492501'
killing process with pid 2492501
13:33:50 -- common/autotest_common.sh@955 -- # kill 2492501
13:33:50 -- common/autotest_common.sh@960 -- # wait 2492501
00:04:47.797
00:04:47.797 real 0m3.299s
00:04:47.797 user 0m3.430s
00:04:47.797 sys 0m1.042s
00:04:47.797 13:33:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:47.797 13:33:50 -- common/autotest_common.sh@10 -- # set +x
00:04:47.797 ************************************
00:04:47.797 END TEST non_locking_app_on_locked_coremask
00:04:47.797 ************************************
00:04:48.057 13:33:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:48.057 13:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:48.057 13:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:48.057 13:33:50 -- common/autotest_common.sh@10 -- # set +x
00:04:48.057 ************************************
00:04:48.057 START TEST locking_app_on_unlocked_coremask
00:04:48.057 ************************************
00:04:48.057 13:33:50 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask
00:04:48.057 13:33:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2492938
00:04:48.057 13:33:50 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:48.057 13:33:50 -- event/cpu_locks.sh@99 -- # waitforlisten 2492938 /var/tmp/spdk.sock
00:04:48.057 13:33:50 -- common/autotest_common.sh@817 -- # '[' -z 2492938 ']'
00:04:48.057 13:33:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:48.057 13:33:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:48.057 13:33:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:48.057 13:33:50 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:48.057 13:33:50 -- common/autotest_common.sh@10 -- # set +x
00:04:48.057 [2024-04-18 13:33:50.773503] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:48.057 [2024-04-18 13:33:50.773591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492938 ]
00:04:48.057 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.057 [2024-04-18 13:33:50.831340] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:48.057 [2024-04-18 13:33:50.831377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.318 [2024-04-18 13:33:50.939960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:48.576 13:33:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:48.576 13:33:51 -- common/autotest_common.sh@850 -- # return 0
00:04:48.576 13:33:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2492943
00:04:48.576 13:33:51 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:48.576 13:33:51 -- event/cpu_locks.sh@103 -- # waitforlisten 2492943 /var/tmp/spdk2.sock
00:04:48.576 13:33:51 -- common/autotest_common.sh@817 -- # '[' -z 2492943 ']'
00:04:48.576 13:33:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:48.576 13:33:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:48.576 13:33:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:48.576 13:33:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:48.576 13:33:51 -- common/autotest_common.sh@10 -- # set +x
00:04:48.576 [2024-04-18 13:33:51.260565] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:48.577 [2024-04-18 13:33:51.260651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492943 ]
00:04:48.577 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.577 [2024-04-18 13:33:51.354062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.836 [2024-04-18 13:33:51.585946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.405 13:33:52 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:49.405 13:33:52 -- common/autotest_common.sh@850 -- # return 0
00:04:49.405 13:33:52 -- event/cpu_locks.sh@105 -- # locks_exist 2492943
00:04:49.405 13:33:52 -- event/cpu_locks.sh@22 -- # lslocks -p 2492943
00:04:49.405 13:33:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:49.972 lslocks: write error
00:04:49.972 13:33:52 -- event/cpu_locks.sh@107 -- # killprocess 2492938
00:04:49.972 13:33:52 -- common/autotest_common.sh@936 -- # '[' -z 2492938 ']'
00:04:49.972 13:33:52 -- common/autotest_common.sh@940 -- # kill -0 2492938
00:04:49.972 13:33:52 -- common/autotest_common.sh@941 -- # uname
00:04:49.972 13:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:49.972 13:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492938
00:04:49.972 13:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:49.972 13:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:49.972 13:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492938'
killing process with pid 2492938
13:33:52 -- common/autotest_common.sh@955 -- # kill 2492938
13:33:52 -- common/autotest_common.sh@960 -- # wait 2492938
00:04:50.904 13:33:53 -- event/cpu_locks.sh@108 -- # killprocess 2492943
00:04:50.904 13:33:53 -- common/autotest_common.sh@936 -- # '[' -z 2492943 ']'
00:04:50.904 13:33:53 -- common/autotest_common.sh@940 -- # kill -0 2492943
00:04:50.904 13:33:53 -- common/autotest_common.sh@941 -- # uname
00:04:50.904 13:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:50.904 13:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2492943
00:04:50.904 13:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:50.904 13:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:50.904 13:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2492943'
killing process with pid 2492943
13:33:53 -- common/autotest_common.sh@955 -- # kill 2492943
13:33:53 -- common/autotest_common.sh@960 -- # wait 2492943
00:04:51.472
00:04:51.472 real 0m3.362s
00:04:51.472 user 0m3.467s
00:04:51.472 sys 0m1.050s
00:04:51.472 13:33:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:51.472 13:33:54 -- common/autotest_common.sh@10 -- # set +x
00:04:51.472 ************************************
00:04:51.472 END TEST locking_app_on_unlocked_coremask
00:04:51.472 ************************************
00:04:51.472 13:33:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:51.472 13:33:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:51.472 13:33:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:51.472 13:33:54 -- common/autotest_common.sh@10 -- # set +x
00:04:51.472 ************************************
00:04:51.473 START TEST locking_app_on_locked_coremask
00:04:51.473 ************************************
00:04:51.473 13:33:54 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask
00:04:51.473 13:33:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2493378
00:04:51.473 13:33:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:51.473 13:33:54 -- event/cpu_locks.sh@116 -- # waitforlisten 2493378 /var/tmp/spdk.sock
00:04:51.473 13:33:54 -- common/autotest_common.sh@817 -- # '[' -z 2493378 ']'
00:04:51.473 13:33:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.473 13:33:54 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:51.473 13:33:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:51.473 13:33:54 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:51.473 13:33:54 -- common/autotest_common.sh@10 -- # set +x
00:04:51.473 [2024-04-18 13:33:54.261134] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:04:51.473 [2024-04-18 13:33:54.261256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493378 ]
00:04:51.731 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.731 [2024-04-18 13:33:54.324380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.731 [2024-04-18 13:33:54.437721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.666 13:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:52.666 13:33:55 -- common/autotest_common.sh@850 -- # return 0
00:04:52.666 13:33:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2493514
00:04:52.666 13:33:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:52.666 13:33:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2493514 /var/tmp/spdk2.sock
00:04:52.666 13:33:55 -- common/autotest_common.sh@638 -- # local es=0
00:04:52.666 13:33:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2493514 /var/tmp/spdk2.sock
00:04:52.666 13:33:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:04:52.666 13:33:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:52.666 13:33:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:04:52.666 13:33:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:52.666 13:33:55 -- common/autotest_common.sh@641 -- # waitforlisten 2493514 /var/tmp/spdk2.sock
00:04:52.666 13:33:55 -- common/autotest_common.sh@817 -- # '[' -z 2493514 ']'
00:04:52.666 13:33:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:52.666 13:33:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:52.666 13:33:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start
up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.666 13:33:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:52.666 13:33:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.666 [2024-04-18 13:33:55.225704] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:52.666 [2024-04-18 13:33:55.225783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493514 ] 00:04:52.666 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.666 [2024-04-18 13:33:55.323017] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2493378 has claimed it. 00:04:52.666 [2024-04-18 13:33:55.323086] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:04:53.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2493514) - No such process 00:04:53.232 ERROR: process (pid: 2493514) is no longer running 00:04:53.232 13:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:53.232 13:33:55 -- common/autotest_common.sh@850 -- # return 1 00:04:53.232 13:33:55 -- common/autotest_common.sh@641 -- # es=1 00:04:53.232 13:33:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:53.232 13:33:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:53.232 13:33:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:53.232 13:33:55 -- event/cpu_locks.sh@122 -- # locks_exist 2493378 00:04:53.232 13:33:55 -- event/cpu_locks.sh@22 -- # lslocks -p 2493378 00:04:53.232 13:33:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.491 lslocks: write error 00:04:53.491 13:33:56 -- event/cpu_locks.sh@124 -- # killprocess 2493378 00:04:53.491 13:33:56 -- common/autotest_common.sh@936 -- # '[' -z 2493378 ']' 00:04:53.491 13:33:56 -- common/autotest_common.sh@940 -- # kill -0 2493378 00:04:53.491 13:33:56 -- common/autotest_common.sh@941 -- # uname 00:04:53.491 13:33:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.491 13:33:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2493378 00:04:53.491 13:33:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.491 13:33:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.491 13:33:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2493378' 00:04:53.491 killing process with pid 2493378 00:04:53.491 13:33:56 -- common/autotest_common.sh@955 -- # kill 2493378 00:04:53.491 13:33:56 -- common/autotest_common.sh@960 -- # wait 2493378 00:04:54.058 00:04:54.058 real 0m2.427s 00:04:54.058 user 0m2.740s 00:04:54.058 sys 0m0.630s 00:04:54.058 13:33:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.058 
13:33:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.058 ************************************ 00:04:54.058 END TEST locking_app_on_locked_coremask 00:04:54.058 ************************************ 00:04:54.058 13:33:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:54.058 13:33:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.058 13:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.058 13:33:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.058 ************************************ 00:04:54.058 START TEST locking_overlapped_coremask 00:04:54.058 ************************************ 00:04:54.058 13:33:56 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:04:54.058 13:33:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2493690 00:04:54.058 13:33:56 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:54.058 13:33:56 -- event/cpu_locks.sh@133 -- # waitforlisten 2493690 /var/tmp/spdk.sock 00:04:54.058 13:33:56 -- common/autotest_common.sh@817 -- # '[' -z 2493690 ']' 00:04:54.058 13:33:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.058 13:33:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.058 13:33:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.058 13:33:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.058 13:33:56 -- common/autotest_common.sh@10 -- # set +x 00:04:54.058 [2024-04-18 13:33:56.815268] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:54.058 [2024-04-18 13:33:56.815347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493690 ] 00:04:54.058 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.317 [2024-04-18 13:33:56.874322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.317 [2024-04-18 13:33:56.987978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.317 [2024-04-18 13:33:56.991197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.317 [2024-04-18 13:33:56.991209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.575 13:33:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.575 13:33:57 -- common/autotest_common.sh@850 -- # return 0 00:04:54.575 13:33:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2493820 00:04:54.575 13:33:57 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:54.575 13:33:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2493820 /var/tmp/spdk2.sock 00:04:54.575 13:33:57 -- common/autotest_common.sh@638 -- # local es=0 00:04:54.575 13:33:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2493820 /var/tmp/spdk2.sock 00:04:54.575 13:33:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:54.575 13:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:54.575 13:33:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:54.575 13:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:54.575 13:33:57 -- common/autotest_common.sh@641 -- # waitforlisten 2493820 /var/tmp/spdk2.sock 00:04:54.575 13:33:57 -- common/autotest_common.sh@817 -- # '[' -z 2493820 ']' 00:04:54.575 13:33:57 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:04:54.575 13:33:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.575 13:33:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.575 13:33:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.575 13:33:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.575 [2024-04-18 13:33:57.288149] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:54.576 [2024-04-18 13:33:57.288266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493820 ] 00:04:54.576 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.576 [2024-04-18 13:33:57.375316] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2493690 has claimed it. 00:04:54.576 [2024-04-18 13:33:57.375379] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:04:55.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2493820) - No such process 00:04:55.542 ERROR: process (pid: 2493820) is no longer running 00:04:55.542 13:33:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.542 13:33:57 -- common/autotest_common.sh@850 -- # return 1 00:04:55.542 13:33:57 -- common/autotest_common.sh@641 -- # es=1 00:04:55.542 13:33:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:55.542 13:33:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:55.542 13:33:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:55.542 13:33:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:55.542 13:33:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.542 13:33:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.542 13:33:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.542 13:33:57 -- event/cpu_locks.sh@141 -- # killprocess 2493690 00:04:55.542 13:33:57 -- common/autotest_common.sh@936 -- # '[' -z 2493690 ']' 00:04:55.542 13:33:57 -- common/autotest_common.sh@940 -- # kill -0 2493690 00:04:55.542 13:33:57 -- common/autotest_common.sh@941 -- # uname 00:04:55.542 13:33:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.542 13:33:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2493690 00:04:55.542 13:33:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:55.542 13:33:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:55.542 13:33:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2493690' 00:04:55.542 killing process with pid 2493690 00:04:55.542 
13:33:58 -- common/autotest_common.sh@955 -- # kill 2493690 00:04:55.542 13:33:58 -- common/autotest_common.sh@960 -- # wait 2493690 00:04:55.801 00:04:55.801 real 0m1.677s 00:04:55.801 user 0m4.448s 00:04:55.801 sys 0m0.440s 00:04:55.801 13:33:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.801 13:33:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.801 ************************************ 00:04:55.801 END TEST locking_overlapped_coremask 00:04:55.801 ************************************ 00:04:55.801 13:33:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:55.801 13:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.801 13:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.801 13:33:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.801 ************************************ 00:04:55.801 START TEST locking_overlapped_coremask_via_rpc 00:04:55.801 ************************************ 00:04:55.801 13:33:58 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:04:55.801 13:33:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2493997 00:04:55.801 13:33:58 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:55.801 13:33:58 -- event/cpu_locks.sh@149 -- # waitforlisten 2493997 /var/tmp/spdk.sock 00:04:55.801 13:33:58 -- common/autotest_common.sh@817 -- # '[' -z 2493997 ']' 00:04:55.801 13:33:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.801 13:33:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:55.801 13:33:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.801 13:33:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:55.801 13:33:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.060 [2024-04-18 13:33:58.621081] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:56.060 [2024-04-18 13:33:58.621187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493997 ] 00:04:56.060 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.060 [2024-04-18 13:33:58.683186] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:56.060 [2024-04-18 13:33:58.683230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.060 [2024-04-18 13:33:58.797211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.060 [2024-04-18 13:33:58.797257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.060 [2024-04-18 13:33:58.797260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.994 13:33:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.994 13:33:59 -- common/autotest_common.sh@850 -- # return 0 00:04:56.994 13:33:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2494137 00:04:56.994 13:33:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:56.994 13:33:59 -- event/cpu_locks.sh@153 -- # waitforlisten 2494137 /var/tmp/spdk2.sock 00:04:56.994 13:33:59 -- common/autotest_common.sh@817 -- # '[' -z 2494137 ']' 00:04:56.994 13:33:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.994 13:33:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.994 13:33:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk2.sock...' 00:04:56.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.994 13:33:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.994 13:33:59 -- common/autotest_common.sh@10 -- # set +x 00:04:56.994 [2024-04-18 13:33:59.591495] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:04:56.994 [2024-04-18 13:33:59.591575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494137 ] 00:04:56.994 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.994 [2024-04-18 13:33:59.680654] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:56.994 [2024-04-18 13:33:59.680691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.252 [2024-04-18 13:33:59.897788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.252 [2024-04-18 13:33:59.901276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:57.252 [2024-04-18 13:33:59.901279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.816 13:34:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:57.816 13:34:00 -- common/autotest_common.sh@850 -- # return 0 00:04:57.816 13:34:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.816 13:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.817 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 13:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.817 13:34:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.817 13:34:00 -- common/autotest_common.sh@638 -- # local es=0 00:04:57.817 13:34:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s 
/var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.817 13:34:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:57.817 13:34:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.817 13:34:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:57.817 13:34:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.817 13:34:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.817 13:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.817 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 [2024-04-18 13:34:00.553282] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2493997 has claimed it. 00:04:57.817 request: 00:04:57.817 { 00:04:57.817 "method": "framework_enable_cpumask_locks", 00:04:57.817 "req_id": 1 00:04:57.817 } 00:04:57.817 Got JSON-RPC error response 00:04:57.817 response: 00:04:57.817 { 00:04:57.817 "code": -32603, 00:04:57.817 "message": "Failed to claim CPU core: 2" 00:04:57.817 } 00:04:57.817 13:34:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:57.817 13:34:00 -- common/autotest_common.sh@641 -- # es=1 00:04:57.817 13:34:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:57.817 13:34:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:57.817 13:34:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:57.817 13:34:00 -- event/cpu_locks.sh@158 -- # waitforlisten 2493997 /var/tmp/spdk.sock 00:04:57.817 13:34:00 -- common/autotest_common.sh@817 -- # '[' -z 2493997 ']' 00:04:57.817 13:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.817 13:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.817 13:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:57.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.817 13:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.817 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.074 13:34:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.074 13:34:00 -- common/autotest_common.sh@850 -- # return 0 00:04:58.074 13:34:00 -- event/cpu_locks.sh@159 -- # waitforlisten 2494137 /var/tmp/spdk2.sock 00:04:58.074 13:34:00 -- common/autotest_common.sh@817 -- # '[' -z 2494137 ']' 00:04:58.074 13:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.074 13:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.074 13:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.074 13:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.074 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.332 13:34:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.332 13:34:01 -- common/autotest_common.sh@850 -- # return 0 00:04:58.332 13:34:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:58.332 13:34:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.332 13:34:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.333 13:34:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.333 00:04:58.333 real 0m2.483s 00:04:58.333 user 0m1.182s 00:04:58.333 sys 0m0.223s 00:04:58.333 13:34:01 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:04:58.333 13:34:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.333 ************************************ 00:04:58.333 END TEST locking_overlapped_coremask_via_rpc 00:04:58.333 ************************************ 00:04:58.333 13:34:01 -- event/cpu_locks.sh@174 -- # cleanup 00:04:58.333 13:34:01 -- event/cpu_locks.sh@15 -- # [[ -z 2493997 ]] 00:04:58.333 13:34:01 -- event/cpu_locks.sh@15 -- # killprocess 2493997 00:04:58.333 13:34:01 -- common/autotest_common.sh@936 -- # '[' -z 2493997 ']' 00:04:58.333 13:34:01 -- common/autotest_common.sh@940 -- # kill -0 2493997 00:04:58.333 13:34:01 -- common/autotest_common.sh@941 -- # uname 00:04:58.333 13:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.333 13:34:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2493997 00:04:58.333 13:34:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.333 13:34:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.333 13:34:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2493997' 00:04:58.333 killing process with pid 2493997 00:04:58.333 13:34:01 -- common/autotest_common.sh@955 -- # kill 2493997 00:04:58.333 13:34:01 -- common/autotest_common.sh@960 -- # wait 2493997 00:04:58.898 13:34:01 -- event/cpu_locks.sh@16 -- # [[ -z 2494137 ]] 00:04:58.899 13:34:01 -- event/cpu_locks.sh@16 -- # killprocess 2494137 00:04:58.899 13:34:01 -- common/autotest_common.sh@936 -- # '[' -z 2494137 ']' 00:04:58.899 13:34:01 -- common/autotest_common.sh@940 -- # kill -0 2494137 00:04:58.899 13:34:01 -- common/autotest_common.sh@941 -- # uname 00:04:58.899 13:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.899 13:34:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2494137 00:04:58.899 13:34:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:58.899 13:34:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo 
']' 00:04:58.899 13:34:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2494137' 00:04:58.899 killing process with pid 2494137 00:04:58.899 13:34:01 -- common/autotest_common.sh@955 -- # kill 2494137 00:04:58.899 13:34:01 -- common/autotest_common.sh@960 -- # wait 2494137 00:04:59.465 13:34:02 -- event/cpu_locks.sh@18 -- # rm -f 00:04:59.465 13:34:02 -- event/cpu_locks.sh@1 -- # cleanup 00:04:59.465 13:34:02 -- event/cpu_locks.sh@15 -- # [[ -z 2493997 ]] 00:04:59.465 13:34:02 -- event/cpu_locks.sh@15 -- # killprocess 2493997 00:04:59.465 13:34:02 -- common/autotest_common.sh@936 -- # '[' -z 2493997 ']' 00:04:59.465 13:34:02 -- common/autotest_common.sh@940 -- # kill -0 2493997 00:04:59.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2493997) - No such process 00:04:59.465 13:34:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2493997 is not found' 00:04:59.465 Process with pid 2493997 is not found 00:04:59.465 13:34:02 -- event/cpu_locks.sh@16 -- # [[ -z 2494137 ]] 00:04:59.465 13:34:02 -- event/cpu_locks.sh@16 -- # killprocess 2494137 00:04:59.465 13:34:02 -- common/autotest_common.sh@936 -- # '[' -z 2494137 ']' 00:04:59.465 13:34:02 -- common/autotest_common.sh@940 -- # kill -0 2494137 00:04:59.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2494137) - No such process 00:04:59.465 13:34:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2494137 is not found' 00:04:59.465 Process with pid 2494137 is not found 00:04:59.465 13:34:02 -- event/cpu_locks.sh@18 -- # rm -f 00:04:59.465 00:04:59.465 real 0m18.763s 00:04:59.465 user 0m31.810s 00:04:59.465 sys 0m5.695s 00:04:59.465 13:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.465 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.465 ************************************ 00:04:59.465 END TEST cpu_locks 00:04:59.465 
************************************ 00:04:59.465 00:04:59.465 real 0m45.065s 00:04:59.465 user 1m23.080s 00:04:59.465 sys 0m10.059s 00:04:59.465 13:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.465 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.465 ************************************ 00:04:59.465 END TEST event 00:04:59.465 ************************************ 00:04:59.465 13:34:02 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:59.465 13:34:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.465 13:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.465 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.465 ************************************ 00:04:59.465 START TEST thread 00:04:59.465 ************************************ 00:04:59.465 13:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:59.465 * Looking for test storage... 00:04:59.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:59.465 13:34:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:59.465 13:34:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:59.465 13:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.465 13:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.723 ************************************ 00:04:59.723 START TEST thread_poller_perf 00:04:59.723 ************************************ 00:04:59.723 13:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:59.723 [2024-04-18 13:34:02.374604] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:04:59.723 [2024-04-18 13:34:02.374669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494513 ] 00:04:59.723 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.723 [2024-04-18 13:34:02.438970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.980 [2024-04-18 13:34:02.555905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.980 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:00.913 ====================================== 00:05:00.913 busy:2714648284 (cyc) 00:05:00.913 total_run_count: 292000 00:05:00.913 tsc_hz: 2700000000 (cyc) 00:05:00.913 ====================================== 00:05:00.913 poller_cost: 9296 (cyc), 3442 (nsec) 00:05:00.913 00:05:00.913 real 0m1.330s 00:05:00.913 user 0m1.234s 00:05:00.913 sys 0m0.090s 00:05:00.913 13:34:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.913 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.913 ************************************ 00:05:00.913 END TEST thread_poller_perf 00:05:00.913 ************************************ 00:05:00.913 13:34:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.913 13:34:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:00.913 13:34:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.913 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:05:01.171 ************************************ 00:05:01.171 START TEST thread_poller_perf 00:05:01.171 ************************************ 00:05:01.171 13:34:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:01.171 
[2024-04-18 13:34:03.822504] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:01.171 [2024-04-18 13:34:03.822573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494796 ] 00:05:01.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.171 [2024-04-18 13:34:03.883268] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.429 [2024-04-18 13:34:03.996042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.429 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:02.362 ====================================== 00:05:02.362 busy:2702400647 (cyc) 00:05:02.362 total_run_count: 3871000 00:05:02.362 tsc_hz: 2700000000 (cyc) 00:05:02.362 ====================================== 00:05:02.362 poller_cost: 698 (cyc), 258 (nsec) 00:05:02.362 00:05:02.362 real 0m1.308s 00:05:02.362 user 0m1.227s 00:05:02.362 sys 0m0.076s 00:05:02.362 13:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.362 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.362 ************************************ 00:05:02.362 END TEST thread_poller_perf 00:05:02.362 ************************************ 00:05:02.362 13:34:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:02.362 00:05:02.362 real 0m2.937s 00:05:02.362 user 0m2.581s 00:05:02.362 sys 0m0.333s 00:05:02.362 13:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.362 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.362 ************************************ 00:05:02.362 END TEST thread 00:05:02.362 ************************************ 00:05:02.362 13:34:05 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:02.362 13:34:05 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:05:02.362 13:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.362 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.621 ************************************ 00:05:02.621 START TEST accel 00:05:02.621 ************************************ 00:05:02.621 13:34:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:02.621 * Looking for test storage... 00:05:02.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:02.621 13:34:05 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:02.621 13:34:05 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:02.621 13:34:05 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.621 13:34:05 -- accel/accel.sh@62 -- # spdk_tgt_pid=2495004 00:05:02.621 13:34:05 -- accel/accel.sh@63 -- # waitforlisten 2495004 00:05:02.621 13:34:05 -- common/autotest_common.sh@817 -- # '[' -z 2495004 ']' 00:05:02.621 13:34:05 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:02.621 13:34:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.621 13:34:05 -- accel/accel.sh@61 -- # build_accel_config 00:05:02.621 13:34:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.621 13:34:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.621 13:34:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:02.621 13:34:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.621 13:34:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.621 13:34:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.621 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.621 13:34:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.621 13:34:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.621 13:34:05 -- accel/accel.sh@40 -- # local IFS=, 00:05:02.621 13:34:05 -- accel/accel.sh@41 -- # jq -r . 00:05:02.621 [2024-04-18 13:34:05.366358] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:02.621 [2024-04-18 13:34:05.366470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495004 ] 00:05:02.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.621 [2024-04-18 13:34:05.427244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.880 [2024-04-18 13:34:05.542937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.812 13:34:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.812 13:34:06 -- common/autotest_common.sh@850 -- # return 0 00:05:03.812 13:34:06 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:03.812 13:34:06 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:03.812 13:34:06 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:03.812 13:34:06 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:03.812 13:34:06 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:03.812 13:34:06 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:03.812 13:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.812 13:34:06 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:03.812 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:03.812 13:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.812 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # IFS== 
00:05:03.812 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.812 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:03.813 13:34:06 -- accel/accel.sh@72 -- # IFS== 00:05:03.813 13:34:06 -- 
accel/accel.sh@72 -- # read -r opc module 00:05:03.813 13:34:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:03.813 13:34:06 -- accel/accel.sh@75 -- # killprocess 2495004 00:05:03.813 13:34:06 -- common/autotest_common.sh@936 -- # '[' -z 2495004 ']' 00:05:03.813 13:34:06 -- common/autotest_common.sh@940 -- # kill -0 2495004 00:05:03.813 13:34:06 -- common/autotest_common.sh@941 -- # uname 00:05:03.813 13:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.813 13:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2495004 00:05:03.813 13:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.813 13:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.813 13:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2495004' 00:05:03.813 killing process with pid 2495004 00:05:03.813 13:34:06 -- common/autotest_common.sh@955 -- # kill 2495004 00:05:03.813 13:34:06 -- common/autotest_common.sh@960 -- # wait 2495004 00:05:04.071 13:34:06 -- accel/accel.sh@76 -- # trap - ERR 00:05:04.071 13:34:06 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:04.071 13:34:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:04.071 13:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.071 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.329 13:34:06 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:04.329 13:34:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:04.329 13:34:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.329 13:34:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.329 13:34:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:05:04.329 13:34:06 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.329 13:34:06 -- accel/accel.sh@41 -- # jq -r . 00:05:04.329 13:34:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.329 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.329 13:34:06 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:04.329 13:34:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:04.329 13:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.329 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.329 ************************************ 00:05:04.329 START TEST accel_missing_filename 00:05:04.329 ************************************ 00:05:04.329 13:34:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:04.329 13:34:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:04.329 13:34:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:04.329 13:34:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:04.329 13:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.329 13:34:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:04.329 13:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.329 13:34:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:04.329 13:34:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:04.329 13:34:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.329 13:34:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.329 13:34:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.329 13:34:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.329 13:34:07 -- 
accel/accel.sh@40 -- # local IFS=, 00:05:04.329 13:34:07 -- accel/accel.sh@41 -- # jq -r . 00:05:04.329 [2024-04-18 13:34:07.074197] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:04.329 [2024-04-18 13:34:07.074284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495204 ] 00:05:04.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.588 [2024-04-18 13:34:07.139405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.588 [2024-04-18 13:34:07.254061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.588 [2024-04-18 13:34:07.314149] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:04.846 [2024-04-18 13:34:07.402444] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:04.846 A filename is required. 
00:05:04.846 13:34:07 -- common/autotest_common.sh@641 -- # es=234 00:05:04.846 13:34:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:04.846 13:34:07 -- common/autotest_common.sh@650 -- # es=106 00:05:04.846 13:34:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:04.846 13:34:07 -- common/autotest_common.sh@658 -- # es=1 00:05:04.846 13:34:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:04.846 00:05:04.846 real 0m0.465s 00:05:04.846 user 0m0.359s 00:05:04.846 sys 0m0.139s 00:05:04.846 13:34:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.846 13:34:07 -- common/autotest_common.sh@10 -- # set +x 00:05:04.846 ************************************ 00:05:04.846 END TEST accel_missing_filename 00:05:04.846 ************************************ 00:05:04.846 13:34:07 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:04.846 13:34:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:04.846 13:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.847 13:34:07 -- common/autotest_common.sh@10 -- # set +x 00:05:04.847 ************************************ 00:05:04.847 START TEST accel_compress_verify 00:05:04.847 ************************************ 00:05:04.847 13:34:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:04.847 13:34:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:04.847 13:34:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:04.847 13:34:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:04.847 13:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.847 13:34:07 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:05:04.847 13:34:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.847 13:34:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:04.847 13:34:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:04.847 13:34:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.847 13:34:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.847 13:34:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.847 13:34:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.847 13:34:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.847 13:34:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.847 13:34:07 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.847 13:34:07 -- accel/accel.sh@41 -- # jq -r . 00:05:05.105 [2024-04-18 13:34:07.663355] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:05.105 [2024-04-18 13:34:07.663415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495351 ] 00:05:05.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.105 [2024-04-18 13:34:07.727291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.105 [2024-04-18 13:34:07.842506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.105 [2024-04-18 13:34:07.906012] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.364 [2024-04-18 13:34:07.992347] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:05.364 00:05:05.364 Compression does not support the verify option, aborting. 
00:05:05.364 13:34:08 -- common/autotest_common.sh@641 -- # es=161 00:05:05.364 13:34:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:05.364 13:34:08 -- common/autotest_common.sh@650 -- # es=33 00:05:05.364 13:34:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:05.365 13:34:08 -- common/autotest_common.sh@658 -- # es=1 00:05:05.365 13:34:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:05.365 00:05:05.365 real 0m0.476s 00:05:05.365 user 0m0.377s 00:05:05.365 sys 0m0.133s 00:05:05.365 13:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.365 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.365 ************************************ 00:05:05.365 END TEST accel_compress_verify 00:05:05.365 ************************************ 00:05:05.365 13:34:08 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:05.365 13:34:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:05.365 13:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.365 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 ************************************ 00:05:05.624 START TEST accel_wrong_workload 00:05:05.624 ************************************ 00:05:05.624 13:34:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:05.624 13:34:08 -- common/autotest_common.sh@638 -- # local es=0 00:05:05.624 13:34:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:05.624 13:34:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.624 13:34:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:05.624 13:34:08 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:05.624 13:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.624 13:34:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.624 13:34:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.624 13:34:08 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.624 13:34:08 -- accel/accel.sh@41 -- # jq -r . 00:05:05.624 Unsupported workload type: foobar 00:05:05.624 [2024-04-18 13:34:08.259070] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:05.624 accel_perf options: 00:05:05.624 [-h help message] 00:05:05.624 [-q queue depth per core] 00:05:05.624 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:05.624 [-T number of threads per core 00:05:05.624 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:05.624 [-t time in seconds] 00:05:05.624 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:05.624 [ dif_verify, , dif_generate, dif_generate_copy 00:05:05.624 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:05.624 [-l for compress/decompress workloads, name of uncompressed input file 00:05:05.624 [-S for crc32c workload, use this seed value (default 0) 00:05:05.624 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:05.624 [-f for fill workload, use this BYTE value (default 255) 00:05:05.624 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:05.624 [-y verify result if this switch is on] 00:05:05.624 [-a tasks to allocate per core (default: same value as -q)] 00:05:05.624 Can be used to spread operations across a wider range of memory. 00:05:05.624 13:34:08 -- common/autotest_common.sh@641 -- # es=1 00:05:05.624 13:34:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:05.624 13:34:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:05.624 13:34:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:05.624 00:05:05.624 real 0m0.022s 00:05:05.624 user 0m0.014s 00:05:05.624 sys 0m0.007s 00:05:05.624 13:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.624 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 ************************************ 00:05:05.624 END TEST accel_wrong_workload 00:05:05.624 ************************************ 00:05:05.624 Error: writing output failed: Broken pipe 00:05:05.624 13:34:08 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:05.624 13:34:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:05.624 13:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:05.624 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 ************************************ 00:05:05.624 START TEST accel_negative_buffers 00:05:05.624 ************************************ 00:05:05.624 13:34:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:05.624 13:34:08 -- common/autotest_common.sh@638 -- # local es=0 00:05:05.624 13:34:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:05.624 13:34:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:05.624 13:34:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.624 13:34:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:05.624 13:34:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:05.624 13:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.624 13:34:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.624 13:34:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.624 13:34:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.624 13:34:08 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.624 13:34:08 -- accel/accel.sh@41 -- # jq -r . 00:05:05.624 -x option must be non-negative. 
00:05:05.624 [2024-04-18 13:34:08.394624] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:05.624 accel_perf options: 00:05:05.624 [-h help message] 00:05:05.624 [-q queue depth per core] 00:05:05.624 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:05.624 [-T number of threads per core 00:05:05.624 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:05.624 [-t time in seconds] 00:05:05.624 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:05.624 [ dif_verify, , dif_generate, dif_generate_copy 00:05:05.624 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:05.624 [-l for compress/decompress workloads, name of uncompressed input file 00:05:05.624 [-S for crc32c workload, use this seed value (default 0) 00:05:05.624 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:05.624 [-f for fill workload, use this BYTE value (default 255) 00:05:05.624 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:05.624 [-y verify result if this switch is on] 00:05:05.624 [-a tasks to allocate per core (default: same value as -q)] 00:05:05.624 Can be used to spread operations across a wider range of memory. 
00:05:05.624 13:34:08 -- common/autotest_common.sh@641 -- # es=1 00:05:05.624 13:34:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:05.624 13:34:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:05.624 13:34:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:05.624 00:05:05.624 real 0m0.020s 00:05:05.624 user 0m0.011s 00:05:05.624 sys 0m0.009s 00:05:05.624 13:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.624 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.624 ************************************ 00:05:05.624 END TEST accel_negative_buffers 00:05:05.624 ************************************ 00:05:05.624 Error: writing output failed: Broken pipe 00:05:05.624 13:34:08 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:05.624 13:34:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:05.624 13:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.624 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.883 ************************************ 00:05:05.883 START TEST accel_crc32c 00:05:05.883 ************************************ 00:05:05.883 13:34:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:05.883 13:34:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.883 13:34:08 -- accel/accel.sh@17 -- # local accel_module 00:05:05.883 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:05.883 13:34:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:05.883 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:05.883 13:34:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:05.883 13:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.883 13:34:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.883 13:34:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.883 13:34:08 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.883 13:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.883 13:34:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.883 13:34:08 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.883 13:34:08 -- accel/accel.sh@41 -- # jq -r . 00:05:05.883 [2024-04-18 13:34:08.526610] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:05.883 [2024-04-18 13:34:08.526675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495559 ] 00:05:05.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.883 [2024-04-18 13:34:08.590220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.142 [2024-04-18 13:34:08.702626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=0x1 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 
00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=crc32c 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=32 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=software 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@22 -- # accel_module=software 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=32 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=32 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=1 00:05:06.142 13:34:08 
-- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val=Yes 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:06.142 13:34:08 -- accel/accel.sh@20 -- # val= 00:05:06.142 13:34:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # IFS=: 00:05:06.142 13:34:08 -- accel/accel.sh@19 -- # read -r var val 00:05:07.517 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.517 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.517 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.517 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.517 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.517 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.517 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.517 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.517 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.517 
13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.518 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.518 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.518 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.518 13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.518 13:34:09 -- accel/accel.sh@20 -- # val= 00:05:07.518 13:34:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.518 13:34:09 -- accel/accel.sh@19 -- # IFS=: 00:05:07.518 13:34:09 -- accel/accel.sh@19 -- # read -r var val 00:05:07.518 13:34:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.518 13:34:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:07.518 13:34:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.518 00:05:07.518 real 0m1.467s 00:05:07.518 user 0m1.319s 00:05:07.518 sys 0m0.150s 00:05:07.518 13:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.518 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 END TEST accel_crc32c 00:05:07.518 ************************************ 00:05:07.518 13:34:09 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:07.518 13:34:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:07.518 13:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.518 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 START TEST accel_crc32c_C2 00:05:07.518 ************************************ 00:05:07.518 13:34:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:07.518 13:34:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.518 13:34:10 -- accel/accel.sh@17 -- # local accel_module 00:05:07.518 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.518 13:34:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:07.518 13:34:10 -- accel/accel.sh@19 -- # read -r var val 
00:05:07.518 13:34:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:07.518 13:34:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.518 13:34:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.518 13:34:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.518 13:34:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.518 13:34:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.518 13:34:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.518 13:34:10 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.518 13:34:10 -- accel/accel.sh@41 -- # jq -r . 00:05:07.518 [2024-04-18 13:34:10.125477] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:07.518 [2024-04-18 13:34:10.125540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495721 ] 00:05:07.518 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.518 [2024-04-18 13:34:10.188378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.518 [2024-04-18 13:34:10.307630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=0x1 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- 
accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=crc32c 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=0 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=software 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=32 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 
-- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=32 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=1 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val=Yes 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:07.777 13:34:10 -- accel/accel.sh@20 -- # val= 00:05:07.777 13:34:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # IFS=: 00:05:07.777 13:34:10 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- 
accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.188 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.188 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.188 13:34:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.188 13:34:11 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:09.188 13:34:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.188 00:05:09.188 real 0m1.486s 00:05:09.188 user 0m1.343s 00:05:09.188 sys 0m0.144s 00:05:09.188 13:34:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.188 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.188 ************************************ 00:05:09.188 END TEST accel_crc32c_C2 00:05:09.188 ************************************ 00:05:09.188 13:34:11 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:09.189 13:34:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:09.189 13:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.189 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.189 ************************************ 00:05:09.189 START TEST accel_copy 00:05:09.189 ************************************ 00:05:09.189 13:34:11 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:09.189 13:34:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.189 13:34:11 -- accel/accel.sh@17 -- # local accel_module 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:09.189 13:34:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:09.189 13:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.189 13:34:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.189 13:34:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.189 13:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.189 13:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.189 13:34:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.189 13:34:11 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.189 13:34:11 -- accel/accel.sh@41 -- # jq -r . 00:05:09.189 [2024-04-18 13:34:11.730561] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:09.189 [2024-04-18 13:34:11.730625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496003 ] 00:05:09.189 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.189 [2024-04-18 13:34:11.795547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.189 [2024-04-18 13:34:11.910169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=0x1 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=copy 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- 
accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=software 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=32 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=32 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=1 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val=Yes 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 
-- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:09.189 13:34:11 -- accel/accel.sh@20 -- # val= 00:05:09.189 13:34:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # IFS=: 00:05:09.189 13:34:11 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.563 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 13:34:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.563 13:34:13 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:05:10.563 13:34:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.563 00:05:10.563 real 0m1.476s 00:05:10.563 user 0m1.331s 00:05:10.563 sys 0m0.146s 00:05:10.563 13:34:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.563 13:34:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.563 ************************************ 00:05:10.563 END TEST accel_copy 00:05:10.563 ************************************ 00:05:10.563 13:34:13 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:10.563 13:34:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:10.563 13:34:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.563 13:34:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.563 ************************************ 00:05:10.563 START TEST accel_fill 00:05:10.563 ************************************ 00:05:10.563 13:34:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:10.563 13:34:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.563 13:34:13 -- accel/accel.sh@17 -- # local accel_module 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.564 13:34:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:10.564 13:34:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:10.564 13:34:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.564 13:34:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.564 13:34:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.564 13:34:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.564 13:34:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.564 13:34:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.564 13:34:13 -- 
accel/accel.sh@40 -- # local IFS=, 00:05:10.564 13:34:13 -- accel/accel.sh@41 -- # jq -r . 00:05:10.564 [2024-04-18 13:34:13.333245] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:10.564 [2024-04-18 13:34:13.333306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496171 ] 00:05:10.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.823 [2024-04-18 13:34:13.397043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.823 [2024-04-18 13:34:13.515241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val=0x1 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val=fill 
00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val=0x80 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.823 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.823 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.823 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val=software 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@22 -- # accel_module=software 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val=64 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val=64 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val=1 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 
13:34:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val=Yes 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:10.824 13:34:13 -- accel/accel.sh@20 -- # val= 00:05:10.824 13:34:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # IFS=: 00:05:10.824 13:34:13 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 
00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@20 -- # val= 00:05:12.198 13:34:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.198 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.198 13:34:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.198 13:34:14 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:12.198 13:34:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.198 00:05:12.198 real 0m1.488s 00:05:12.198 user 0m1.339s 00:05:12.198 sys 0m0.150s 00:05:12.198 13:34:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.198 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.198 ************************************ 00:05:12.198 END TEST accel_fill 00:05:12.198 ************************************ 00:05:12.198 13:34:14 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:12.198 13:34:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:12.198 13:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.198 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.198 ************************************ 00:05:12.198 START TEST accel_copy_crc32c 00:05:12.198 ************************************ 00:05:12.198 13:34:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:12.198 13:34:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.199 13:34:14 -- accel/accel.sh@17 -- # local accel_module 00:05:12.199 13:34:14 -- accel/accel.sh@19 -- # IFS=: 00:05:12.199 13:34:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:12.199 13:34:14 -- accel/accel.sh@19 -- # read -r var val 00:05:12.199 13:34:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 
00:05:12.199 13:34:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.199 13:34:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.199 13:34:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.199 13:34:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.199 13:34:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.199 13:34:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.199 13:34:14 -- accel/accel.sh@40 -- # local IFS=, 00:05:12.199 13:34:14 -- accel/accel.sh@41 -- # jq -r . 00:05:12.199 [2024-04-18 13:34:14.944636] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:12.199 [2024-04-18 13:34:14.944702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496338 ] 00:05:12.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.457 [2024-04-18 13:34:15.009207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.457 [2024-04-18 13:34:15.124578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.457 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.457 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.457 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.457 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.457 13:34:15 -- accel/accel.sh@20 -- # val=0x1 00:05:12.457 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.457 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.457 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 
13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=0 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=software 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@22 -- # accel_module=software 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=32 00:05:12.458 
13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=32 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=1 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val=Yes 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:12.458 13:34:15 -- accel/accel.sh@20 -- # val= 00:05:12.458 13:34:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # IFS=: 00:05:12.458 13:34:15 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 
00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:13.832 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.832 13:34:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:13.832 13:34:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.832 00:05:13.832 real 0m1.469s 00:05:13.832 user 0m1.325s 00:05:13.832 sys 0m0.146s 00:05:13.832 13:34:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.832 13:34:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 ************************************ 00:05:13.832 END TEST accel_copy_crc32c 00:05:13.832 ************************************ 00:05:13.832 13:34:16 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:13.832 13:34:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:13.832 13:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.832 13:34:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 ************************************ 00:05:13.832 START 
TEST accel_copy_crc32c_C2 00:05:13.832 ************************************ 00:05:13.832 13:34:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:13.832 13:34:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.832 13:34:16 -- accel/accel.sh@17 -- # local accel_module 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:13.832 13:34:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:13.832 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:13.832 13:34:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:13.832 13:34:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.832 13:34:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.832 13:34:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.832 13:34:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.832 13:34:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.832 13:34:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.832 13:34:16 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.832 13:34:16 -- accel/accel.sh@41 -- # jq -r . 00:05:13.832 [2024-04-18 13:34:16.538316] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:13.832 [2024-04-18 13:34:16.538374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496617 ] 00:05:13.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.832 [2024-04-18 13:34:16.603055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.091 [2024-04-18 13:34:16.716463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val=0x1 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- 
accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val=0 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.091 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.091 13:34:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.091 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val=software 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@22 -- # accel_module=software 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val=32 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val=32 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val=1 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 
-- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val=Yes 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:14.092 13:34:16 -- accel/accel.sh@20 -- # val= 00:05:14.092 13:34:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # IFS=: 00:05:14.092 13:34:16 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@20 -- # val= 00:05:15.466 13:34:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:17 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.466 13:34:17 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:15.466 13:34:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.466 00:05:15.466 real 0m1.468s 00:05:15.466 user 0m1.324s 00:05:15.466 sys 0m0.146s 00:05:15.466 13:34:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.466 13:34:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 END TEST accel_copy_crc32c_C2 00:05:15.466 ************************************ 00:05:15.466 13:34:18 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:15.466 13:34:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:15.466 13:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.466 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 START TEST accel_dualcast 00:05:15.466 ************************************ 00:05:15.466 13:34:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:15.466 13:34:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.466 13:34:18 -- accel/accel.sh@17 -- # local accel_module 00:05:15.466 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.466 13:34:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:15.466 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.466 13:34:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w dualcast -y 00:05:15.466 13:34:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.466 13:34:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.466 13:34:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.466 13:34:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.466 13:34:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.466 13:34:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.466 13:34:18 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.466 13:34:18 -- accel/accel.sh@41 -- # jq -r . 00:05:15.466 [2024-04-18 13:34:18.136482] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:15.466 [2024-04-18 13:34:18.136538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496786 ] 00:05:15.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.466 [2024-04-18 13:34:18.199557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.725 [2024-04-18 13:34:18.316889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=0x1 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- 
# case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=dualcast 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=software 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=32 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=32 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=1 
00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val=Yes 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:15.725 13:34:18 -- accel/accel.sh@20 -- # val= 00:05:15.725 13:34:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # IFS=: 00:05:15.725 13:34:18 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # 
IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.099 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.099 13:34:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.099 13:34:19 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:17.099 13:34:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.099 00:05:17.099 real 0m1.483s 00:05:17.099 user 0m1.346s 00:05:17.099 sys 0m0.137s 00:05:17.099 13:34:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.099 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:05:17.099 ************************************ 00:05:17.099 END TEST accel_dualcast 00:05:17.099 ************************************ 00:05:17.099 13:34:19 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:17.099 13:34:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:17.099 13:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.099 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:05:17.099 ************************************ 00:05:17.099 START TEST accel_compare 00:05:17.099 ************************************ 00:05:17.099 13:34:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:17.099 13:34:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.099 13:34:19 -- accel/accel.sh@17 -- # local accel_module 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.099 13:34:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:17.099 13:34:19 -- accel/accel.sh@19 -- # read -r var 
val 00:05:17.099 13:34:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:17.099 13:34:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.099 13:34:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.099 13:34:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.099 13:34:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.099 13:34:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.099 13:34:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.099 13:34:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.099 13:34:19 -- accel/accel.sh@41 -- # jq -r . 00:05:17.099 [2024-04-18 13:34:19.746350] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:17.099 [2024-04-18 13:34:19.746417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497066 ] 00:05:17.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.099 [2024-04-18 13:34:19.813356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.358 [2024-04-18 13:34:19.934602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.358 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:19 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:19 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:19 -- accel/accel.sh@20 -- # val=0x1 00:05:17.358 13:34:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:19 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- 
accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=compare 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=software 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@22 -- # accel_module=software 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=32 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=32 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=1 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val=Yes 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:17.358 13:34:20 -- accel/accel.sh@20 -- # val= 00:05:17.358 13:34:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # IFS=: 00:05:17.358 13:34:20 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 
13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.733 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.733 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.733 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.734 13:34:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.734 13:34:21 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:18.734 13:34:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.734 00:05:18.734 real 0m1.494s 00:05:18.734 user 0m1.347s 00:05:18.734 sys 0m0.147s 00:05:18.734 13:34:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.734 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.734 ************************************ 00:05:18.734 END TEST accel_compare 00:05:18.734 ************************************ 00:05:18.734 13:34:21 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:18.734 13:34:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:18.734 13:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.734 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.734 ************************************ 00:05:18.734 START TEST accel_xor 00:05:18.734 ************************************ 00:05:18.734 13:34:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:18.734 13:34:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.734 13:34:21 -- accel/accel.sh@17 -- # local accel_module 00:05:18.734 13:34:21 -- accel/accel.sh@19 -- # IFS=: 
00:05:18.734 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.734 13:34:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:18.734 13:34:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:18.734 13:34:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.734 13:34:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.734 13:34:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.734 13:34:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.734 13:34:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.734 13:34:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.734 13:34:21 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.734 13:34:21 -- accel/accel.sh@41 -- # jq -r . 00:05:18.734 [2024-04-18 13:34:21.364101] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:18.734 [2024-04-18 13:34:21.364166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497233 ] 00:05:18.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.734 [2024-04-18 13:34:21.431255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.993 [2024-04-18 13:34:21.553980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=0x1 00:05:18.993 
13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=xor 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=2 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=software 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@22 -- # accel_module=software 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- 
accel/accel.sh@20 -- # val=32 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=32 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=1 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val=Yes 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:18.993 13:34:21 -- accel/accel.sh@20 -- # val= 00:05:18.993 13:34:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # IFS=: 00:05:18.993 13:34:21 -- accel/accel.sh@19 -- # read -r var val 00:05:20.367 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 
13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@20 -- # val= 00:05:20.368 13:34:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.368 13:34:22 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:20.368 13:34:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.368 00:05:20.368 real 0m1.497s 00:05:20.368 user 0m1.342s 00:05:20.368 sys 0m0.156s 00:05:20.368 13:34:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.368 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:05:20.368 ************************************ 00:05:20.368 END TEST accel_xor 00:05:20.368 ************************************ 00:05:20.368 13:34:22 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:20.368 13:34:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:20.368 13:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.368 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:05:20.368 ************************************ 00:05:20.368 
START TEST accel_xor 00:05:20.368 ************************************ 00:05:20.368 13:34:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:20.368 13:34:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.368 13:34:22 -- accel/accel.sh@17 -- # local accel_module 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # IFS=: 00:05:20.368 13:34:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:20.368 13:34:22 -- accel/accel.sh@19 -- # read -r var val 00:05:20.368 13:34:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:20.368 13:34:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.368 13:34:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.368 13:34:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.368 13:34:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.368 13:34:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.368 13:34:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.368 13:34:22 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.368 13:34:22 -- accel/accel.sh@41 -- # jq -r . 00:05:20.368 [2024-04-18 13:34:22.988029] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:20.368 [2024-04-18 13:34:22.988090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497441 ] 00:05:20.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.368 [2024-04-18 13:34:23.054534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.626 [2024-04-18 13:34:23.176420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=0x1 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=xor 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- 
accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=3 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=software 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=32 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=32 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=1 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- 
# read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val=Yes 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:20.626 13:34:23 -- accel/accel.sh@20 -- # val= 00:05:20.626 13:34:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # IFS=: 00:05:20.626 13:34:23 -- accel/accel.sh@19 -- # read -r var val 00:05:21.999 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:21.999 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:21.999 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:21.999 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:21.999 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:21.999 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:21.999 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:21.999 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:21.999 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:21.999 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.000 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.000 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.000 13:34:24 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.000 13:34:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.000 13:34:24 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:22.000 13:34:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.000 00:05:22.000 real 0m1.490s 00:05:22.000 user 0m1.345s 00:05:22.000 sys 0m0.146s 00:05:22.000 13:34:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.000 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.000 ************************************ 00:05:22.000 END TEST accel_xor 00:05:22.000 ************************************ 00:05:22.000 13:34:24 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:22.000 13:34:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:22.000 13:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.000 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.000 ************************************ 00:05:22.000 START TEST accel_dif_verify 00:05:22.000 ************************************ 00:05:22.000 13:34:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:22.000 13:34:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.000 13:34:24 -- accel/accel.sh@17 -- # local accel_module 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.000 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.000 13:34:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:22.000 13:34:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:22.000 13:34:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.000 13:34:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.000 13:34:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.000 13:34:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 
]] 00:05:22.000 13:34:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.000 13:34:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.000 13:34:24 -- accel/accel.sh@40 -- # local IFS=, 00:05:22.000 13:34:24 -- accel/accel.sh@41 -- # jq -r . 00:05:22.000 [2024-04-18 13:34:24.601933] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:22.000 [2024-04-18 13:34:24.601999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497677 ] 00:05:22.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.000 [2024-04-18 13:34:24.665151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.000 [2024-04-18 13:34:24.783556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=0x1 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- 
accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=dif_verify 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=software 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@22 -- # accel_module=software 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=32 00:05:22.259 
13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=32 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=1 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val=No 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:22.259 13:34:24 -- accel/accel.sh@20 -- # val= 00:05:22.259 13:34:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # IFS=: 00:05:22.259 13:34:24 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 
00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.633 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.633 13:34:26 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:23.633 13:34:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.633 00:05:23.633 real 0m1.491s 00:05:23.633 user 0m1.349s 00:05:23.633 sys 0m0.145s 00:05:23.633 13:34:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.633 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:05:23.633 ************************************ 00:05:23.633 END TEST accel_dif_verify 00:05:23.633 ************************************ 00:05:23.633 13:34:26 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:23.633 13:34:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:23.633 13:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.633 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:05:23.633 ************************************ 00:05:23.633 START TEST 
accel_dif_generate 00:05:23.633 ************************************ 00:05:23.633 13:34:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:23.633 13:34:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.633 13:34:26 -- accel/accel.sh@17 -- # local accel_module 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.633 13:34:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:23.633 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.633 13:34:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:23.633 13:34:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.633 13:34:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.633 13:34:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.633 13:34:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.633 13:34:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.633 13:34:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.633 13:34:26 -- accel/accel.sh@40 -- # local IFS=, 00:05:23.633 13:34:26 -- accel/accel.sh@41 -- # jq -r . 00:05:23.633 [2024-04-18 13:34:26.219986] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:23.633 [2024-04-18 13:34:26.220049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497848 ] 00:05:23.633 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.633 [2024-04-18 13:34:26.285756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.633 [2024-04-18 13:34:26.403683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=0x1 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=dif_generate 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 
-- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=software 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@22 -- # accel_module=software 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=32 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=32 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 
-- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=1 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val=No 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:23.891 13:34:26 -- accel/accel.sh@20 -- # val= 00:05:23.891 13:34:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # IFS=: 00:05:23.891 13:34:26 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 
13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@20 -- # val= 00:05:25.314 13:34:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.314 13:34:27 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:25.314 13:34:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.314 00:05:25.314 real 0m1.481s 00:05:25.314 user 0m1.333s 00:05:25.314 sys 0m0.152s 00:05:25.314 13:34:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.314 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 ************************************ 00:05:25.314 END TEST accel_dif_generate 00:05:25.314 ************************************ 00:05:25.314 13:34:27 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:25.314 13:34:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:25.314 13:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.314 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 ************************************ 00:05:25.314 START TEST accel_dif_generate_copy 00:05:25.314 ************************************ 00:05:25.314 13:34:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:25.314 13:34:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.314 13:34:27 -- accel/accel.sh@17 -- # local accel_module 00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # IFS=: 
00:05:25.314 13:34:27 -- accel/accel.sh@19 -- # read -r var val 00:05:25.314 13:34:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:25.314 13:34:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:25.314 13:34:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.314 13:34:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.314 13:34:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.314 13:34:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.314 13:34:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.314 13:34:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.314 13:34:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:25.314 13:34:27 -- accel/accel.sh@41 -- # jq -r . 00:05:25.314 [2024-04-18 13:34:27.825055] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:25.314 [2024-04-18 13:34:27.825126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498130 ] 00:05:25.314 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.314 [2024-04-18 13:34:27.887347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.315 [2024-04-18 13:34:28.008930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # 
val=0x1 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=software 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@22 -- # accel_module=software 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- 
accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=32 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=32 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=1 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val=No 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:25.315 13:34:28 -- accel/accel.sh@20 -- # val= 00:05:25.315 13:34:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # IFS=: 00:05:25.315 13:34:28 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 
13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.689 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.689 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.689 13:34:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.689 13:34:29 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:26.689 13:34:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.689 00:05:26.689 real 0m1.491s 00:05:26.689 user 0m1.347s 00:05:26.689 sys 0m0.146s 00:05:26.689 13:34:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.689 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.689 ************************************ 00:05:26.689 END TEST accel_dif_generate_copy 00:05:26.690 ************************************ 00:05:26.690 13:34:29 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:26.690 13:34:29 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.690 13:34:29 -- common/autotest_common.sh@1087 -- # 
'[' 8 -le 1 ']' 00:05:26.690 13:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.690 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.690 ************************************ 00:05:26.690 START TEST accel_comp 00:05:26.690 ************************************ 00:05:26.690 13:34:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.690 13:34:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.690 13:34:29 -- accel/accel.sh@17 -- # local accel_module 00:05:26.690 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.690 13:34:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.690 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.690 13:34:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.690 13:34:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.690 13:34:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.690 13:34:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.690 13:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.690 13:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.690 13:34:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.690 13:34:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.690 13:34:29 -- accel/accel.sh@41 -- # jq -r . 00:05:26.690 [2024-04-18 13:34:29.442080] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:26.690 [2024-04-18 13:34:29.442140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498297 ] 00:05:26.690 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.948 [2024-04-18 13:34:29.506513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.948 [2024-04-18 13:34:29.627480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=0x1 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 
-- # val=compress 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=software 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=32 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=32 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=1 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 
00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val=No 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:26.948 13:34:29 -- accel/accel.sh@20 -- # val= 00:05:26.948 13:34:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # IFS=: 00:05:26.948 13:34:29 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # 
val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@20 -- # val= 00:05:28.322 13:34:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:30 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.322 13:34:30 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:28.322 13:34:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.322 00:05:28.322 real 0m1.497s 00:05:28.322 user 0m1.353s 00:05:28.322 sys 0m0.146s 00:05:28.322 13:34:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.322 13:34:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.322 ************************************ 00:05:28.322 END TEST accel_comp 00:05:28.322 ************************************ 00:05:28.322 13:34:30 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.322 13:34:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:28.322 13:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.322 13:34:30 -- common/autotest_common.sh@10 -- # set +x 00:05:28.322 ************************************ 00:05:28.322 START TEST accel_decomp 00:05:28.322 ************************************ 00:05:28.322 13:34:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.322 13:34:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.322 13:34:31 -- accel/accel.sh@17 -- # local accel_module 00:05:28.322 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.322 13:34:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.322 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.322 13:34:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.322 13:34:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.322 13:34:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.322 13:34:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.322 13:34:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.322 13:34:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.322 13:34:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.322 13:34:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:28.322 13:34:31 -- accel/accel.sh@41 -- # jq -r . 00:05:28.322 [2024-04-18 13:34:31.064924] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:28.322 [2024-04-18 13:34:31.064991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498575 ] 00:05:28.322 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.322 [2024-04-18 13:34:31.126884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.580 [2024-04-18 13:34:31.246802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 
00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=0x1 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=decompress 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=software 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 
13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=32 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=32 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=1 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val=Yes 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r var val 00:05:28.580 13:34:31 -- accel/accel.sh@20 -- # val= 00:05:28.580 13:34:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # IFS=: 00:05:28.580 13:34:31 -- accel/accel.sh@19 -- # read -r 
var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:29.951 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.951 13:34:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:29.951 13:34:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.951 00:05:29.951 real 0m1.484s 00:05:29.951 user 0m1.343s 00:05:29.951 sys 0m0.144s 00:05:29.951 13:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.951 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 ************************************ 00:05:29.951 END TEST accel_decomp 00:05:29.951 ************************************ 
00:05:29.951 13:34:32 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:29.951 13:34:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:29.951 13:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.951 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 ************************************ 00:05:29.951 START TEST accel_decmop_full 00:05:29.951 ************************************ 00:05:29.951 13:34:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:29.951 13:34:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.951 13:34:32 -- accel/accel.sh@17 -- # local accel_module 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:29.951 13:34:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:29.951 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:29.951 13:34:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:29.951 13:34:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.951 13:34:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.951 13:34:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.951 13:34:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.951 13:34:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.951 13:34:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.951 13:34:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:29.951 13:34:32 -- accel/accel.sh@41 -- # jq -r . 00:05:29.951 [2024-04-18 13:34:32.675904] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:29.951 [2024-04-18 13:34:32.675971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498741 ] 00:05:29.951 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.951 [2024-04-18 13:34:32.740481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.209 [2024-04-18 13:34:32.859262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.209 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.209 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.209 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.209 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.209 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.209 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.209 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.209 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.209 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.209 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=0x1 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 
-- # val=decompress 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=software 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@22 -- # accel_module=software 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=32 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=32 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=1 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # 
IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val=Yes 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:30.210 13:34:32 -- accel/accel.sh@20 -- # val= 00:05:30.210 13:34:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # IFS=: 00:05:30.210 13:34:32 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 
-- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.584 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 13:34:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.584 13:34:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:31.584 13:34:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.584 00:05:31.584 real 0m1.494s 00:05:31.584 user 0m1.355s 00:05:31.584 sys 0m0.141s 00:05:31.584 13:34:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.584 13:34:34 -- common/autotest_common.sh@10 -- # set +x 00:05:31.584 ************************************ 00:05:31.584 END TEST accel_decmop_full 00:05:31.584 ************************************ 00:05:31.584 13:34:34 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:31.584 13:34:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:31.584 13:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.584 13:34:34 -- common/autotest_common.sh@10 -- # set +x 00:05:31.584 ************************************ 00:05:31.584 START TEST accel_decomp_mcore 00:05:31.584 ************************************ 00:05:31.584 13:34:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:31.584 13:34:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.584 13:34:34 -- accel/accel.sh@17 -- # local accel_module 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.584 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.584 
13:34:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:31.584 13:34:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:31.584 13:34:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.584 13:34:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.584 13:34:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.584 13:34:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.584 13:34:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.584 13:34:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.584 13:34:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.584 13:34:34 -- accel/accel.sh@41 -- # jq -r . 00:05:31.584 [2024-04-18 13:34:34.296706] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:31.584 [2024-04-18 13:34:34.296771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498913 ] 00:05:31.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.584 [2024-04-18 13:34:34.364275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.843 [2024-04-18 13:34:34.489916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.843 [2024-04-18 13:34:34.489968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.843 [2024-04-18 13:34:34.490022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.843 [2024-04-18 13:34:34.490026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 
-- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val=0xf 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val=decompress 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 
00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val=software 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@22 -- # accel_module=software 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.843 13:34:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:31.843 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.843 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val=32 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val=32 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val=1 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val=Yes 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 
13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:31.844 13:34:34 -- accel/accel.sh@20 -- # val= 00:05:31.844 13:34:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # IFS=: 00:05:31.844 13:34:34 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 
-- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@20 -- # val= 00:05:33.217 13:34:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.217 13:34:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:33.217 13:34:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.217 00:05:33.217 real 0m1.507s 00:05:33.217 user 0m4.834s 00:05:33.217 sys 0m0.151s 00:05:33.217 13:34:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.217 13:34:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.217 ************************************ 00:05:33.217 END TEST accel_decomp_mcore 00:05:33.217 ************************************ 00:05:33.217 13:34:35 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.217 13:34:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:33.217 13:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.217 13:34:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.217 ************************************ 00:05:33.217 START TEST accel_decomp_full_mcore 00:05:33.217 ************************************ 00:05:33.217 13:34:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.217 13:34:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.217 13:34:35 -- accel/accel.sh@17 -- # local accel_module 00:05:33.217 13:34:35 -- accel/accel.sh@19 -- # IFS=: 00:05:33.217 13:34:35 -- 
accel/accel.sh@19 -- # read -r var val 00:05:33.217 13:34:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.217 13:34:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.217 13:34:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.217 13:34:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.217 13:34:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.217 13:34:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.217 13:34:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.217 13:34:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.217 13:34:35 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.217 13:34:35 -- accel/accel.sh@41 -- # jq -r . 00:05:33.217 [2024-04-18 13:34:35.931580] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:33.217 [2024-04-18 13:34:35.931648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499197 ] 00:05:33.217 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.217 [2024-04-18 13:34:35.996217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.475 [2024-04-18 13:34:36.121902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.475 [2024-04-18 13:34:36.121958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.475 [2024-04-18 13:34:36.122012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.475 [2024-04-18 13:34:36.122015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val=0xf 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 
-- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val=decompress 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.475 13:34:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.475 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.475 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=software 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=32 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=32 00:05:33.476 13:34:36 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=1 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val=Yes 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:33.476 13:34:36 -- accel/accel.sh@20 -- # val= 00:05:33.476 13:34:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # IFS=: 00:05:33.476 13:34:36 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 
13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:34.847 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.847 13:34:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.847 13:34:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.847 00:05:34.847 real 0m1.519s 00:05:34.847 user 0m4.876s 00:05:34.847 sys 0m0.166s 00:05:34.847 13:34:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.847 13:34:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.847 ************************************ 00:05:34.847 END TEST 
accel_decomp_full_mcore 00:05:34.847 ************************************ 00:05:34.847 13:34:37 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:34.847 13:34:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:34.847 13:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.847 13:34:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.847 ************************************ 00:05:34.847 START TEST accel_decomp_mthread 00:05:34.847 ************************************ 00:05:34.847 13:34:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:34.847 13:34:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.847 13:34:37 -- accel/accel.sh@17 -- # local accel_module 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:34.847 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:34.847 13:34:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:34.848 13:34:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:34.848 13:34:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.848 13:34:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.848 13:34:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.848 13:34:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.848 13:34:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.848 13:34:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.848 13:34:37 -- accel/accel.sh@40 -- # local IFS=, 00:05:34.848 13:34:37 -- accel/accel.sh@41 -- # jq -r . 
00:05:34.848 [2024-04-18 13:34:37.576061] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:34.848 [2024-04-18 13:34:37.576122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499362 ] 00:05:34.848 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.848 [2024-04-18 13:34:37.639714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.106 [2024-04-18 13:34:37.760618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=0x1 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- 
accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=decompress 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=software 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=32 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=32 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- 
accel/accel.sh@20 -- # val=2 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val=Yes 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:35.106 13:34:37 -- accel/accel.sh@20 -- # val= 00:05:35.106 13:34:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # IFS=: 00:05:35.106 13:34:37 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 
-- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.479 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.479 13:34:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.479 13:34:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.479 00:05:36.479 real 0m1.495s 00:05:36.479 user 0m1.345s 00:05:36.479 sys 0m0.152s 00:05:36.479 13:34:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.479 13:34:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.479 ************************************ 00:05:36.479 END TEST accel_decomp_mthread 00:05:36.479 ************************************ 00:05:36.479 13:34:39 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:36.479 13:34:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:36.479 13:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.479 13:34:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.479 ************************************ 00:05:36.479 START TEST accel_deomp_full_mthread 00:05:36.479 ************************************ 00:05:36.479 13:34:39 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:36.479 13:34:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.479 13:34:39 -- accel/accel.sh@17 -- # local accel_module 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.479 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.479 13:34:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:36.479 13:34:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:36.479 13:34:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.479 13:34:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.479 13:34:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.479 13:34:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.479 13:34:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.479 13:34:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.479 13:34:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.479 13:34:39 -- accel/accel.sh@41 -- # jq -r . 00:05:36.479 [2024-04-18 13:34:39.201994] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:36.479 [2024-04-18 13:34:39.202060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499643 ] 00:05:36.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.479 [2024-04-18 13:34:39.269273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.738 [2024-04-18 13:34:39.390038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=0x1 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 
-- # val=decompress 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=software 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=32 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=32 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=2 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # 
IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val=Yes 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:36.738 13:34:39 -- accel/accel.sh@20 -- # val= 00:05:36.738 13:34:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # IFS=: 00:05:36.738 13:34:39 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 
-- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@20 -- # val= 00:05:38.115 13:34:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # IFS=: 00:05:38.115 13:34:40 -- accel/accel.sh@19 -- # read -r var val 00:05:38.115 13:34:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.115 13:34:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.115 13:34:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.115 00:05:38.115 real 0m1.537s 00:05:38.115 user 0m1.381s 00:05:38.115 sys 0m0.157s 00:05:38.115 13:34:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.115 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.115 ************************************ 00:05:38.115 END TEST accel_deomp_full_mthread 00:05:38.116 ************************************ 00:05:38.116 13:34:40 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:38.116 13:34:40 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.116 13:34:40 -- accel/accel.sh@137 -- # build_accel_config 00:05:38.116 13:34:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:38.116 13:34:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.116 13:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.116 13:34:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.116 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:05:38.116 13:34:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.116 13:34:40 -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.116 13:34:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.116 13:34:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.116 13:34:40 -- accel/accel.sh@41 -- # jq -r . 00:05:38.116 ************************************ 00:05:38.116 START TEST accel_dif_functional_tests 00:05:38.116 ************************************ 00:05:38.116 13:34:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.116 [2024-04-18 13:34:40.883156] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:05:38.116 [2024-04-18 13:34:40.883252] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499814 ] 00:05:38.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.374 [2024-04-18 13:34:40.951267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.374 [2024-04-18 13:34:41.078508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.374 [2024-04-18 13:34:41.078561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.374 [2024-04-18 13:34:41.078565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.374 00:05:38.374 00:05:38.374 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.374 http://cunit.sourceforge.net/ 00:05:38.374 00:05:38.374 00:05:38.374 Suite: accel_dif 00:05:38.374 Test: verify: DIF generated, GUARD check ...passed 00:05:38.374 Test: verify: DIF generated, APPTAG check ...passed 00:05:38.374 Test: verify: DIF generated, REFTAG check ...passed 00:05:38.374 Test: verify: DIF not generated, GUARD check ...[2024-04-18 13:34:41.173920] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:38.374 [2024-04-18 13:34:41.173990] 
dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:38.374 passed 00:05:38.374 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 13:34:41.174028] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:38.374 [2024-04-18 13:34:41.174061] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:38.374 passed 00:05:38.374 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 13:34:41.174091] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:38.374 [2024-04-18 13:34:41.174119] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:38.374 passed 00:05:38.374 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:38.374 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 13:34:41.174201] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:38.374 passed 00:05:38.374 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:38.374 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:38.374 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:38.374 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 13:34:41.174360] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:38.374 passed 00:05:38.374 Test: generate copy: DIF generated, GUARD check ...passed 00:05:38.374 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:38.374 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:38.374 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:38.374 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:38.374 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:38.374 Test: generate copy: 
iovecs-len validate ...[2024-04-18 13:34:41.174624] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:38.374 passed 00:05:38.374 Test: generate copy: buffer alignment validate ...passed 00:05:38.374 00:05:38.374 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.374 suites 1 1 n/a 0 0 00:05:38.374 tests 20 20 20 0 0 00:05:38.374 asserts 204 204 204 0 n/a 00:05:38.374 00:05:38.374 Elapsed time = 0.002 seconds 00:05:38.633 00:05:38.633 real 0m0.591s 00:05:38.633 user 0m0.828s 00:05:38.633 sys 0m0.191s 00:05:38.633 13:34:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.633 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.633 ************************************ 00:05:38.633 END TEST accel_dif_functional_tests 00:05:38.633 ************************************ 00:05:38.892 00:05:38.892 real 0m36.197s 00:05:38.892 user 0m38.423s 00:05:38.892 sys 0m5.706s 00:05:38.892 13:34:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.892 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.892 ************************************ 00:05:38.892 END TEST accel 00:05:38.892 ************************************ 00:05:38.892 13:34:41 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:38.892 13:34:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.892 13:34:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.892 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.892 ************************************ 00:05:38.892 START TEST accel_rpc 00:05:38.892 ************************************ 00:05:38.892 13:34:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:38.892 * Looking for test storage... 
00:05:38.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:38.892 13:34:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.892 13:34:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2500009 00:05:38.892 13:34:41 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:38.892 13:34:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 2500009 00:05:38.892 13:34:41 -- common/autotest_common.sh@817 -- # '[' -z 2500009 ']' 00:05:38.892 13:34:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.892 13:34:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.892 13:34:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.892 13:34:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.892 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:05:38.892 [2024-04-18 13:34:41.682890] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:38.892 [2024-04-18 13:34:41.682999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500009 ] 00:05:39.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.151 [2024-04-18 13:34:41.746284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.151 [2024-04-18 13:34:41.865070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.085 13:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.085 13:34:42 -- common/autotest_common.sh@850 -- # return 0 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:40.085 13:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.085 13:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.085 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.085 ************************************ 00:05:40.085 START TEST accel_assign_opcode 00:05:40.085 ************************************ 00:05:40.085 13:34:42 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:40.085 13:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.085 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.085 [2024-04-18 13:34:42.751819] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:40.085 13:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.085 13:34:42 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:40.085 13:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.085 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.085 [2024-04-18 13:34:42.759824] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:40.085 13:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.085 13:34:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:40.085 13:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.085 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:05:40.376 13:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.376 13:34:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:40.376 13:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.376 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.376 13:34:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:40.376 13:34:43 -- accel/accel_rpc.sh@42 -- # grep software 00:05:40.376 13:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.376 software 00:05:40.376 00:05:40.376 real 0m0.323s 00:05:40.376 user 0m0.042s 00:05:40.376 sys 0m0.008s 00:05:40.376 13:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.376 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.376 ************************************ 00:05:40.376 END TEST accel_assign_opcode 00:05:40.376 ************************************ 00:05:40.376 13:34:43 -- accel/accel_rpc.sh@55 -- # killprocess 2500009 00:05:40.376 13:34:43 -- common/autotest_common.sh@936 -- # '[' -z 2500009 ']' 00:05:40.376 13:34:43 -- common/autotest_common.sh@940 -- # kill -0 2500009 00:05:40.376 13:34:43 -- common/autotest_common.sh@941 -- # uname 00:05:40.376 13:34:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.376 13:34:43 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2500009 00:05:40.376 13:34:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.376 13:34:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.376 13:34:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2500009' 00:05:40.376 killing process with pid 2500009 00:05:40.376 13:34:43 -- common/autotest_common.sh@955 -- # kill 2500009 00:05:40.376 13:34:43 -- common/autotest_common.sh@960 -- # wait 2500009 00:05:40.943 00:05:40.943 real 0m2.012s 00:05:40.943 user 0m2.166s 00:05:40.943 sys 0m0.521s 00:05:40.943 13:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.943 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.943 ************************************ 00:05:40.943 END TEST accel_rpc 00:05:40.943 ************************************ 00:05:40.943 13:34:43 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.943 13:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.943 13:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.943 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.943 ************************************ 00:05:40.943 START TEST app_cmdline 00:05:40.943 ************************************ 00:05:40.943 13:34:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:41.204 * Looking for test storage... 
00:05:41.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.204 13:34:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:41.204 13:34:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2500357 00:05:41.204 13:34:43 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:41.204 13:34:43 -- app/cmdline.sh@18 -- # waitforlisten 2500357 00:05:41.204 13:34:43 -- common/autotest_common.sh@817 -- # '[' -z 2500357 ']' 00:05:41.204 13:34:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.204 13:34:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.204 13:34:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.204 13:34:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.204 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.204 [2024-04-18 13:34:43.835246] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:05:41.204 [2024-04-18 13:34:43.835327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500357 ] 00:05:41.204 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.204 [2024-04-18 13:34:43.893779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.204 [2024-04-18 13:34:43.999676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.773 13:34:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.773 13:34:44 -- common/autotest_common.sh@850 -- # return 0 00:05:41.773 13:34:44 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:41.773 { 00:05:41.773 "version": "SPDK v24.05-pre git sha1 65b4e17c6", 00:05:41.773 "fields": { 00:05:41.773 "major": 24, 00:05:41.773 "minor": 5, 00:05:41.773 "patch": 0, 00:05:41.773 "suffix": "-pre", 00:05:41.773 "commit": "65b4e17c6" 00:05:41.773 } 00:05:41.773 } 00:05:41.773 13:34:44 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:41.773 13:34:44 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:41.773 13:34:44 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:41.773 13:34:44 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:41.773 13:34:44 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:41.773 13:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:41.773 13:34:44 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:41.773 13:34:44 -- app/cmdline.sh@26 -- # sort 00:05:41.773 13:34:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.773 13:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.032 13:34:44 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:42.032 13:34:44 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:42.032 13:34:44 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.032 13:34:44 -- common/autotest_common.sh@638 -- # local es=0 00:05:42.032 13:34:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.032 13:34:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.032 13:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.032 13:34:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.032 13:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.032 13:34:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.032 13:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.032 13:34:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:42.032 13:34:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:42.032 13:34:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:42.032 request: 00:05:42.032 { 00:05:42.032 "method": "env_dpdk_get_mem_stats", 00:05:42.032 "req_id": 1 00:05:42.032 } 00:05:42.032 Got JSON-RPC error response 00:05:42.032 response: 00:05:42.032 { 00:05:42.032 "code": -32601, 00:05:42.032 "message": "Method not found" 00:05:42.032 } 00:05:42.290 13:34:44 -- common/autotest_common.sh@641 -- # es=1 00:05:42.290 13:34:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:42.290 13:34:44 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:42.290 13:34:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:42.290 13:34:44 -- app/cmdline.sh@1 -- # killprocess 2500357 00:05:42.290 13:34:44 -- common/autotest_common.sh@936 -- # '[' -z 2500357 ']' 00:05:42.290 13:34:44 -- common/autotest_common.sh@940 -- # kill -0 2500357 00:05:42.290 13:34:44 -- common/autotest_common.sh@941 -- # uname 00:05:42.290 13:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.290 13:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2500357 00:05:42.290 13:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.290 13:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.290 13:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2500357' 00:05:42.290 killing process with pid 2500357 00:05:42.290 13:34:44 -- common/autotest_common.sh@955 -- # kill 2500357 00:05:42.290 13:34:44 -- common/autotest_common.sh@960 -- # wait 2500357 00:05:42.857 00:05:42.857 real 0m1.628s 00:05:42.857 user 0m1.954s 00:05:42.857 sys 0m0.488s 00:05:42.857 13:34:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.857 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.857 ************************************ 00:05:42.857 END TEST app_cmdline 00:05:42.858 ************************************ 00:05:42.858 13:34:45 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:42.858 13:34:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.858 13:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.858 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.858 ************************************ 00:05:42.858 START TEST version 00:05:42.858 ************************************ 00:05:42.858 13:34:45 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:42.858 * Looking for test storage... 00:05:42.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.858 13:34:45 -- app/version.sh@17 -- # get_header_version major 00:05:42.858 13:34:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.858 13:34:45 -- app/version.sh@14 -- # cut -f2 00:05:42.858 13:34:45 -- app/version.sh@14 -- # tr -d '"' 00:05:42.858 13:34:45 -- app/version.sh@17 -- # major=24 00:05:42.858 13:34:45 -- app/version.sh@18 -- # get_header_version minor 00:05:42.858 13:34:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.858 13:34:45 -- app/version.sh@14 -- # cut -f2 00:05:42.858 13:34:45 -- app/version.sh@14 -- # tr -d '"' 00:05:42.858 13:34:45 -- app/version.sh@18 -- # minor=5 00:05:42.858 13:34:45 -- app/version.sh@19 -- # get_header_version patch 00:05:42.858 13:34:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.858 13:34:45 -- app/version.sh@14 -- # cut -f2 00:05:42.858 13:34:45 -- app/version.sh@14 -- # tr -d '"' 00:05:42.858 13:34:45 -- app/version.sh@19 -- # patch=0 00:05:42.858 13:34:45 -- app/version.sh@20 -- # get_header_version suffix 00:05:42.858 13:34:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.858 13:34:45 -- app/version.sh@14 -- # cut -f2 00:05:42.858 13:34:45 -- app/version.sh@14 -- # tr -d '"' 00:05:42.858 13:34:45 -- app/version.sh@20 -- # suffix=-pre 00:05:42.858 13:34:45 -- app/version.sh@22 -- # version=24.5 00:05:42.858 13:34:45 -- app/version.sh@25 -- # (( patch != 0 )) 
00:05:42.858 13:34:45 -- app/version.sh@28 -- # version=24.5rc0 00:05:42.858 13:34:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:42.858 13:34:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:42.858 13:34:45 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:42.858 13:34:45 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:42.858 00:05:42.858 real 0m0.112s 00:05:42.858 user 0m0.058s 00:05:42.858 sys 0m0.077s 00:05:42.858 13:34:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.858 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.858 ************************************ 00:05:42.858 END TEST version 00:05:42.858 ************************************ 00:05:42.858 13:34:45 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@194 -- # uname -s 00:05:42.858 13:34:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:42.858 13:34:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:42.858 13:34:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:42.858 13:34:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:42.858 13:34:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:42.858 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.858 13:34:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:42.858 13:34:45 
-- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:42.858 13:34:45 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:42.858 13:34:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:42.858 13:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.858 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.118 ************************************ 00:05:43.118 START TEST nvmf_tcp 00:05:43.118 ************************************ 00:05:43.118 13:34:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:43.118 * Looking for test storage... 00:05:43.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.118 13:34:45 -- nvmf/common.sh@7 -- # uname -s 00:05:43.118 13:34:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.118 13:34:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.118 13:34:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.118 13:34:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.118 13:34:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.118 13:34:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.118 13:34:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.118 13:34:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.118 13:34:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.118 13:34:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.118 13:34:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:43.118 13:34:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:43.118 13:34:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.118 13:34:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.118 13:34:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.118 13:34:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.118 13:34:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.118 13:34:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.118 13:34:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.118 13:34:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.118 13:34:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.118 13:34:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.118 13:34:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.118 13:34:45 -- paths/export.sh@5 -- # export PATH 00:05:43.118 13:34:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.118 13:34:45 -- nvmf/common.sh@47 -- # : 0 00:05:43.118 13:34:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.118 13:34:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.118 
13:34:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.118 13:34:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.118 13:34:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.118 13:34:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.118 13:34:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.118 13:34:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:43.118 13:34:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.118 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:43.118 13:34:45 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:43.118 13:34:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:43.118 13:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.118 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.118 ************************************ 00:05:43.118 START TEST nvmf_example 00:05:43.118 ************************************ 00:05:43.118 13:34:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:43.378 * Looking for test storage... 
00:05:43.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.378 13:34:45 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.378 13:34:45 -- nvmf/common.sh@7 -- # uname -s 00:05:43.378 13:34:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.378 13:34:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.378 13:34:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.378 13:34:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.378 13:34:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.378 13:34:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.378 13:34:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.378 13:34:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.378 13:34:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.378 13:34:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.378 13:34:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:43.378 13:34:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:43.378 13:34:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.378 13:34:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.378 13:34:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.378 13:34:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.378 13:34:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.378 13:34:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.378 13:34:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.378 13:34:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.378 13:34:45 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.378 13:34:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.378 13:34:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.378 13:34:45 -- paths/export.sh@5 -- # export PATH 00:05:43.378 13:34:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.378 13:34:45 -- nvmf/common.sh@47 -- # : 0 00:05:43.378 13:34:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.378 13:34:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.378 13:34:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.378 13:34:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.378 13:34:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.378 13:34:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.378 13:34:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.378 13:34:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.378 13:34:45 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:43.378 13:34:45 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:43.378 13:34:45 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:43.378 13:34:45 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:43.378 13:34:45 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:43.378 13:34:45 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:43.378 13:34:45 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:43.378 13:34:45 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:43.378 13:34:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.378 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.378 13:34:45 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:05:43.378 13:34:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:43.378 13:34:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.378 13:34:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:43.378 13:34:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:43.378 13:34:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:43.378 13:34:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.378 13:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.378 13:34:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.378 13:34:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:43.378 13:34:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:43.378 13:34:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:43.378 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:05:45.282 13:34:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:45.282 13:34:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:45.282 13:34:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:45.282 13:34:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:45.282 13:34:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:45.282 13:34:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:45.282 13:34:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:45.282 13:34:47 -- nvmf/common.sh@295 -- # net_devs=() 00:05:45.282 13:34:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:45.282 13:34:47 -- nvmf/common.sh@296 -- # e810=() 00:05:45.282 13:34:47 -- nvmf/common.sh@296 -- # local -ga e810 00:05:45.282 13:34:47 -- nvmf/common.sh@297 -- # x722=() 00:05:45.282 13:34:47 -- nvmf/common.sh@297 -- # local -ga x722 00:05:45.282 13:34:47 -- nvmf/common.sh@298 -- # mlx=() 00:05:45.282 13:34:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:45.282 13:34:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:05:45.282 13:34:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:45.282 13:34:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:45.282 13:34:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:45.283 13:34:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:45.283 13:34:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:05:45.283 Found 0000:84:00.0 (0x8086 - 0x159b) 00:05:45.283 13:34:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:05:45.283 13:34:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:05:45.283 Found 0000:84:00.1 (0x8086 - 0x159b) 00:05:45.283 13:34:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:45.283 13:34:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.283 13:34:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.283 13:34:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:05:45.283 Found net devices under 0000:84:00.0: cvl_0_0 00:05:45.283 13:34:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.283 13:34:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:45.283 13:34:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.283 13:34:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.283 13:34:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:05:45.283 Found net devices under 0000:84:00.1: cvl_0_1 00:05:45.283 13:34:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.283 13:34:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:45.283 13:34:47 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:45.283 13:34:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:45.283 13:34:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:45.283 13:34:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:45.283 13:34:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:45.283 13:34:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:45.283 13:34:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:45.283 13:34:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:45.283 13:34:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:45.283 13:34:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:45.283 13:34:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:45.283 13:34:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:45.283 13:34:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:45.283 13:34:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:45.283 13:34:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:45.283 13:34:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:45.283 13:34:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:45.283 13:34:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:45.283 13:34:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:45.283 13:34:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:45.283 13:34:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:45.283 13:34:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:45.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:45.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:05:45.283 00:05:45.283 --- 10.0.0.2 ping statistics --- 00:05:45.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.283 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:05:45.283 13:34:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:45.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:45.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:05:45.283 00:05:45.283 --- 10.0.0.1 ping statistics --- 00:05:45.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.283 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:05:45.283 13:34:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:45.283 13:34:48 -- nvmf/common.sh@411 -- # return 0 00:05:45.283 13:34:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:45.283 13:34:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:45.283 13:34:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:45.283 13:34:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:45.283 13:34:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:45.283 13:34:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:45.283 13:34:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:45.283 13:34:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:45.283 13:34:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:45.283 13:34:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:45.283 13:34:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.283 13:34:48 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:45.283 13:34:48 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:45.283 13:34:48 -- target/nvmf_example.sh@34 -- # nvmfpid=2502426 00:05:45.283 13:34:48 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:45.283 13:34:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:45.283 13:34:48 -- target/nvmf_example.sh@36 -- # waitforlisten 2502426 00:05:45.283 13:34:48 -- common/autotest_common.sh@817 -- # '[' -z 2502426 ']' 00:05:45.283 13:34:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.283 13:34:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.283 13:34:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.283 13:34:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.283 13:34:48 -- common/autotest_common.sh@10 -- # set +x 00:05:45.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.480 13:34:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.480 13:34:49 -- common/autotest_common.sh@850 -- # return 0 00:05:46.480 13:34:49 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:46.480 13:34:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:46.480 13:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.480 13:34:49 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:46.480 13:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.480 13:34:49 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:46.480 13:34:49 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.480 13:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.480 13:34:49 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:46.480 13:34:49 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:46.480 13:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.480 13:34:49 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:46.480 13:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.480 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.480 13:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.480 13:34:49 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:46.480 13:34:49 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:46.480 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.693 Initializing NVMe Controllers 00:05:58.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:58.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:58.693 Initialization complete. 
Launching workers. 00:05:58.693 ======================================================== 00:05:58.693 Latency(us) 00:05:58.693 Device Information : IOPS MiB/s Average min max 00:05:58.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14902.47 58.21 4294.37 753.85 19190.39 00:05:58.693 ======================================================== 00:05:58.693 Total : 14902.47 58.21 4294.37 753.85 19190.39 00:05:58.693 00:05:58.693 13:34:59 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:58.693 13:34:59 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:58.693 13:34:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:58.693 13:34:59 -- nvmf/common.sh@117 -- # sync 00:05:58.693 13:34:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:58.693 13:34:59 -- nvmf/common.sh@120 -- # set +e 00:05:58.693 13:34:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:58.693 13:34:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:58.693 rmmod nvme_tcp 00:05:58.693 rmmod nvme_fabrics 00:05:58.693 rmmod nvme_keyring 00:05:58.693 13:34:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:58.693 13:34:59 -- nvmf/common.sh@124 -- # set -e 00:05:58.693 13:34:59 -- nvmf/common.sh@125 -- # return 0 00:05:58.693 13:34:59 -- nvmf/common.sh@478 -- # '[' -n 2502426 ']' 00:05:58.693 13:34:59 -- nvmf/common.sh@479 -- # killprocess 2502426 00:05:58.693 13:34:59 -- common/autotest_common.sh@936 -- # '[' -z 2502426 ']' 00:05:58.693 13:34:59 -- common/autotest_common.sh@940 -- # kill -0 2502426 00:05:58.693 13:34:59 -- common/autotest_common.sh@941 -- # uname 00:05:58.693 13:34:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.693 13:34:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2502426 00:05:58.693 13:34:59 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:05:58.693 13:34:59 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:05:58.693 13:34:59 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2502426' 00:05:58.693 killing process with pid 2502426 00:05:58.693 13:34:59 -- common/autotest_common.sh@955 -- # kill 2502426 00:05:58.693 13:34:59 -- common/autotest_common.sh@960 -- # wait 2502426 00:05:58.693 nvmf threads initialize successfully 00:05:58.693 bdev subsystem init successfully 00:05:58.693 created a nvmf target service 00:05:58.693 create targets's poll groups done 00:05:58.693 all subsystems of target started 00:05:58.693 nvmf target is running 00:05:58.693 all subsystems of target stopped 00:05:58.693 destroy targets's poll groups done 00:05:58.693 destroyed the nvmf target service 00:05:58.693 bdev subsystem finish successfully 00:05:58.693 nvmf threads destroy successfully 00:05:58.693 13:34:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:58.693 13:34:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:05:58.693 13:34:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:05:58.693 13:34:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:58.693 13:34:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:58.693 13:34:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.693 13:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:58.693 13:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.953 13:35:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:58.953 13:35:01 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:58.953 13:35:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:58.953 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 00:05:59.214 real 0m15.879s 00:05:59.214 user 0m44.872s 00:05:59.214 sys 0m3.531s 00:05:59.214 13:35:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.214 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 ************************************ 00:05:59.214 END TEST 
nvmf_example 00:05:59.214 ************************************ 00:05:59.214 13:35:01 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:59.214 13:35:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:59.214 13:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.214 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 ************************************ 00:05:59.214 START TEST nvmf_filesystem 00:05:59.214 ************************************ 00:05:59.214 13:35:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:59.214 * Looking for test storage... 00:05:59.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.214 13:35:01 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:59.214 13:35:01 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:59.214 13:35:01 -- common/autotest_common.sh@34 -- # set -e 00:05:59.214 13:35:01 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:59.214 13:35:01 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:59.214 13:35:01 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:59.214 13:35:01 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:59.214 13:35:01 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:59.214 13:35:01 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:59.214 13:35:01 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:59.214 13:35:01 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:59.214 13:35:01 -- 
common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:59.214 13:35:01 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:59.214 13:35:01 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:59.214 13:35:01 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:59.214 13:35:01 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:59.214 13:35:01 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:59.214 13:35:01 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:59.214 13:35:01 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:59.214 13:35:01 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:59.214 13:35:01 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:59.214 13:35:01 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:59.214 13:35:01 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:59.214 13:35:01 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:59.214 13:35:01 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:59.214 13:35:01 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:59.214 13:35:01 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:59.214 13:35:01 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:59.214 13:35:01 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:59.214 13:35:01 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:59.214 13:35:01 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:59.214 13:35:01 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:59.214 13:35:01 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:59.214 13:35:01 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:59.214 13:35:01 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:59.214 13:35:01 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:59.214 13:35:01 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 
00:05:59.214 13:35:01 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:59.214 13:35:01 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:59.214 13:35:01 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:59.215 13:35:01 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:59.215 13:35:01 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:59.215 13:35:01 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:59.215 13:35:01 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:59.215 13:35:01 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:59.215 13:35:01 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:59.215 13:35:01 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:59.215 13:35:01 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:59.215 13:35:01 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:59.215 13:35:01 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:59.215 13:35:01 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:59.215 13:35:01 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:59.215 13:35:01 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:59.215 13:35:01 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:59.215 13:35:01 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:59.215 13:35:01 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:59.215 13:35:01 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:59.215 13:35:01 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:05:59.215 13:35:01 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:05:59.215 13:35:01 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 
00:05:59.215 13:35:01 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:05:59.215 13:35:01 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:05:59.215 13:35:01 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:05:59.215 13:35:01 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:05:59.215 13:35:01 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:05:59.215 13:35:01 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:05:59.215 13:35:01 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:05:59.215 13:35:01 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:05:59.215 13:35:01 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:05:59.215 13:35:01 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:05:59.215 13:35:01 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:59.215 13:35:01 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:05:59.215 13:35:01 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:05:59.215 13:35:01 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:05:59.215 13:35:01 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:05:59.215 13:35:01 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:05:59.215 13:35:01 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:05:59.215 13:35:01 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:05:59.215 13:35:01 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:05:59.215 13:35:01 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:05:59.215 13:35:01 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:05:59.215 13:35:01 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:05:59.215 13:35:01 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:59.215 13:35:01 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:05:59.215 13:35:01 -- common/build_config.sh@82 -- # 
CONFIG_URING=n 00:05:59.215 13:35:01 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:59.215 13:35:01 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:59.215 13:35:01 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:59.215 13:35:01 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:59.215 13:35:01 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.215 13:35:01 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.215 13:35:01 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:59.215 13:35:01 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.215 13:35:01 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:59.215 13:35:01 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:59.215 13:35:01 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:59.215 13:35:01 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:59.215 13:35:01 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:59.215 13:35:01 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:59.215 13:35:01 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:59.215 13:35:01 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:59.215 #define SPDK_CONFIG_H 00:05:59.215 #define SPDK_CONFIG_APPS 1 00:05:59.215 #define SPDK_CONFIG_ARCH native 00:05:59.215 #undef SPDK_CONFIG_ASAN 00:05:59.215 
#undef SPDK_CONFIG_AVAHI 00:05:59.215 #undef SPDK_CONFIG_CET 00:05:59.215 #define SPDK_CONFIG_COVERAGE 1 00:05:59.215 #define SPDK_CONFIG_CROSS_PREFIX 00:05:59.215 #undef SPDK_CONFIG_CRYPTO 00:05:59.215 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:59.215 #undef SPDK_CONFIG_CUSTOMOCF 00:05:59.215 #undef SPDK_CONFIG_DAOS 00:05:59.215 #define SPDK_CONFIG_DAOS_DIR 00:05:59.215 #define SPDK_CONFIG_DEBUG 1 00:05:59.215 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:59.215 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:59.215 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:59.215 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:59.215 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:59.215 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:59.215 #define SPDK_CONFIG_EXAMPLES 1 00:05:59.215 #undef SPDK_CONFIG_FC 00:05:59.215 #define SPDK_CONFIG_FC_PATH 00:05:59.215 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:59.215 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:59.215 #undef SPDK_CONFIG_FUSE 00:05:59.215 #undef SPDK_CONFIG_FUZZER 00:05:59.215 #define SPDK_CONFIG_FUZZER_LIB 00:05:59.215 #undef SPDK_CONFIG_GOLANG 00:05:59.215 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:59.215 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:59.215 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:59.215 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:59.215 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:59.215 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:59.215 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:59.215 #define SPDK_CONFIG_IDXD 1 00:05:59.215 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:59.215 #undef SPDK_CONFIG_IPSEC_MB 00:05:59.215 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:59.215 #define SPDK_CONFIG_ISAL 1 00:05:59.215 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:59.215 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:59.215 #define SPDK_CONFIG_LIBDIR 00:05:59.215 #undef SPDK_CONFIG_LTO 00:05:59.215 #define SPDK_CONFIG_MAX_LCORES 00:05:59.215 #define SPDK_CONFIG_NVME_CUSE 1 
00:05:59.215 #undef SPDK_CONFIG_OCF 00:05:59.215 #define SPDK_CONFIG_OCF_PATH 00:05:59.215 #define SPDK_CONFIG_OPENSSL_PATH 00:05:59.215 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:59.215 #define SPDK_CONFIG_PGO_DIR 00:05:59.215 #undef SPDK_CONFIG_PGO_USE 00:05:59.215 #define SPDK_CONFIG_PREFIX /usr/local 00:05:59.215 #undef SPDK_CONFIG_RAID5F 00:05:59.215 #undef SPDK_CONFIG_RBD 00:05:59.215 #define SPDK_CONFIG_RDMA 1 00:05:59.215 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:59.215 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:59.215 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:59.215 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:59.215 #define SPDK_CONFIG_SHARED 1 00:05:59.215 #undef SPDK_CONFIG_SMA 00:05:59.215 #define SPDK_CONFIG_TESTS 1 00:05:59.215 #undef SPDK_CONFIG_TSAN 00:05:59.215 #define SPDK_CONFIG_UBLK 1 00:05:59.215 #define SPDK_CONFIG_UBSAN 1 00:05:59.215 #undef SPDK_CONFIG_UNIT_TESTS 00:05:59.215 #undef SPDK_CONFIG_URING 00:05:59.215 #define SPDK_CONFIG_URING_PATH 00:05:59.215 #undef SPDK_CONFIG_URING_ZNS 00:05:59.215 #undef SPDK_CONFIG_USDT 00:05:59.215 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:59.215 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:59.215 #define SPDK_CONFIG_VFIO_USER 1 00:05:59.215 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:59.215 #define SPDK_CONFIG_VHOST 1 00:05:59.215 #define SPDK_CONFIG_VIRTIO 1 00:05:59.215 #undef SPDK_CONFIG_VTUNE 00:05:59.215 #define SPDK_CONFIG_VTUNE_DIR 00:05:59.215 #define SPDK_CONFIG_WERROR 1 00:05:59.215 #define SPDK_CONFIG_WPDK_DIR 00:05:59.215 #undef SPDK_CONFIG_XNVME 00:05:59.215 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:59.215 13:35:01 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:59.215 13:35:01 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.215 13:35:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.215 13:35:01 -- scripts/common.sh@510 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.215 13:35:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.215 13:35:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.215 13:35:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.216 13:35:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.216 13:35:01 -- paths/export.sh@5 -- # export PATH 00:05:59.216 13:35:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.216 13:35:01 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:59.216 13:35:01 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:59.216 13:35:01 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:59.216 13:35:01 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:59.216 13:35:01 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:59.216 13:35:01 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.216 13:35:01 -- pm/common@67 -- # TEST_TAG=N/A 00:05:59.216 13:35:01 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:59.216 13:35:01 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:59.216 13:35:01 -- pm/common@71 -- # uname -s 00:05:59.216 13:35:01 -- pm/common@71 -- # PM_OS=Linux 00:05:59.216 13:35:01 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:59.216 13:35:01 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:05:59.216 13:35:01 -- pm/common@76 -- # [[ Linux == Linux ]] 00:05:59.216 13:35:01 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:05:59.216 13:35:01 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:05:59.216 13:35:01 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:59.216 13:35:01 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:59.216 13:35:01 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:05:59.216 13:35:01 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:05:59.216 13:35:01 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:59.216 13:35:01 -- common/autotest_common.sh@57 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:59.216 13:35:01 -- common/autotest_common.sh@61 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:59.216 13:35:01 -- common/autotest_common.sh@63 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:59.216 13:35:01 -- common/autotest_common.sh@65 -- # : 1 00:05:59.216 13:35:01 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:59.216 13:35:01 -- common/autotest_common.sh@67 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:59.216 13:35:01 -- common/autotest_common.sh@69 -- # : 00:05:59.216 13:35:01 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:59.216 13:35:01 -- common/autotest_common.sh@71 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:59.216 13:35:01 -- common/autotest_common.sh@73 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:59.216 13:35:01 -- common/autotest_common.sh@75 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:59.216 13:35:01 -- common/autotest_common.sh@77 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:05:59.216 13:35:01 -- common/autotest_common.sh@79 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:59.216 13:35:01 -- common/autotest_common.sh@81 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:59.216 13:35:01 -- common/autotest_common.sh@83 -- # : 0 00:05:59.216 13:35:01 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:59.216 13:35:02 -- common/autotest_common.sh@85 -- # : 1 00:05:59.216 13:35:02 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:05:59.216 13:35:02 -- common/autotest_common.sh@87 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:59.216 13:35:02 -- common/autotest_common.sh@89 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:59.216 13:35:02 -- common/autotest_common.sh@91 -- # : 1 00:05:59.216 13:35:02 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:59.216 13:35:02 -- common/autotest_common.sh@93 -- # : 1 00:05:59.216 13:35:02 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:59.216 13:35:02 -- common/autotest_common.sh@95 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:59.216 13:35:02 -- common/autotest_common.sh@97 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:59.216 13:35:02 -- common/autotest_common.sh@99 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:59.216 13:35:02 -- common/autotest_common.sh@101 -- # : tcp 00:05:59.216 13:35:02 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:59.216 13:35:02 -- common/autotest_common.sh@103 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:59.216 13:35:02 -- common/autotest_common.sh@105 -- # : 0 
00:05:59.216 13:35:02 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:59.216 13:35:02 -- common/autotest_common.sh@107 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:59.216 13:35:02 -- common/autotest_common.sh@109 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:59.216 13:35:02 -- common/autotest_common.sh@111 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:59.216 13:35:02 -- common/autotest_common.sh@113 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:59.216 13:35:02 -- common/autotest_common.sh@115 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:59.216 13:35:02 -- common/autotest_common.sh@117 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:59.216 13:35:02 -- common/autotest_common.sh@119 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:59.216 13:35:02 -- common/autotest_common.sh@121 -- # : 1 00:05:59.216 13:35:02 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:59.216 13:35:02 -- common/autotest_common.sh@123 -- # : 00:05:59.216 13:35:02 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:59.216 13:35:02 -- common/autotest_common.sh@125 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:59.216 13:35:02 -- common/autotest_common.sh@127 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:59.216 13:35:02 -- common/autotest_common.sh@129 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:59.216 13:35:02 -- common/autotest_common.sh@131 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@132 -- # export 
SPDK_TEST_OCF 00:05:59.216 13:35:02 -- common/autotest_common.sh@133 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:59.216 13:35:02 -- common/autotest_common.sh@135 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:59.216 13:35:02 -- common/autotest_common.sh@137 -- # : 00:05:59.216 13:35:02 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:59.216 13:35:02 -- common/autotest_common.sh@139 -- # : true 00:05:59.216 13:35:02 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:59.216 13:35:02 -- common/autotest_common.sh@141 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:59.216 13:35:02 -- common/autotest_common.sh@143 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:59.216 13:35:02 -- common/autotest_common.sh@145 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:59.216 13:35:02 -- common/autotest_common.sh@147 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:59.216 13:35:02 -- common/autotest_common.sh@149 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:59.216 13:35:02 -- common/autotest_common.sh@151 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:59.216 13:35:02 -- common/autotest_common.sh@153 -- # : e810 00:05:59.216 13:35:02 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:59.216 13:35:02 -- common/autotest_common.sh@155 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:59.216 13:35:02 -- common/autotest_common.sh@157 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:59.216 13:35:02 -- 
common/autotest_common.sh@159 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:59.216 13:35:02 -- common/autotest_common.sh@161 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:59.216 13:35:02 -- common/autotest_common.sh@163 -- # : 0 00:05:59.216 13:35:02 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:59.217 13:35:02 -- common/autotest_common.sh@166 -- # : 00:05:59.217 13:35:02 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:59.217 13:35:02 -- common/autotest_common.sh@168 -- # : 0 00:05:59.217 13:35:02 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:59.217 13:35:02 -- common/autotest_common.sh@170 -- # : 0 00:05:59.217 13:35:02 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:59.217 13:35:02 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:59.217 13:35:02 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.217 13:35:02 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.217 13:35:02 -- common/autotest_common.sh@184 -- 
# export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:59.217 13:35:02 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:59.217 13:35:02 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:59.217 13:35:02 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:59.217 13:35:02 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.217 13:35:02 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.217 13:35:02 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.217 13:35:02 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.217 13:35:02 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:59.217 13:35:02 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:59.217 
13:35:02 -- common/autotest_common.sh@199 -- # cat 00:05:59.217 13:35:02 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:05:59.217 13:35:02 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.217 13:35:02 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.217 13:35:02 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.217 13:35:02 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.217 13:35:02 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:05:59.217 13:35:02 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:05:59.217 13:35:02 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.217 13:35:02 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:59.217 13:35:02 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.217 13:35:02 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:59.217 13:35:02 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:59.217 13:35:02 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:59.217 13:35:02 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:59.217 13:35:02 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:59.217 13:35:02 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:59.217 
13:35:02 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:59.217 13:35:02 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.217 13:35:02 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.217 13:35:02 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:05:59.476 13:35:02 -- common/autotest_common.sh@252 -- # export valgrind= 00:05:59.476 13:35:02 -- common/autotest_common.sh@252 -- # valgrind= 00:05:59.476 13:35:02 -- common/autotest_common.sh@258 -- # uname -s 00:05:59.476 13:35:02 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:05:59.476 13:35:02 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:05:59.476 13:35:02 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:05:59.476 13:35:02 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:05:59.476 13:35:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@268 -- # MAKE=make 00:05:59.476 13:35:02 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:05:59.476 13:35:02 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:05:59.476 13:35:02 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:05:59.476 13:35:02 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:05:59.476 13:35:02 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:05:59.476 13:35:02 -- common/autotest_common.sh@289 -- # for i in "$@" 00:05:59.476 13:35:02 -- common/autotest_common.sh@290 -- # case "$i" in 00:05:59.476 13:35:02 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:05:59.476 13:35:02 -- common/autotest_common.sh@307 -- # [[ -z 2504137 ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@307 -- # kill -0 2504137 00:05:59.476 13:35:02 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:59.476 13:35:02 -- 
common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:05:59.476 13:35:02 -- common/autotest_common.sh@320 -- # local mount target_dir 00:05:59.476 13:35:02 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:05:59.476 13:35:02 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:05:59.476 13:35:02 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:05:59.476 13:35:02 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:05:59.476 13:35:02 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.6j6ETG 00:05:59.476 13:35:02 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:59.476 13:35:02 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6j6ETG/tests/target /tmp/spdk.6j6ETG 00:05:59.476 13:35:02 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@316 -- # df -T 00:05:59.476 13:35:02 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # 
read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=996237312 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288192512 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=37229850624 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=45083308032 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=7853457408 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=22488113152 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541651968 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=9007878144 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9016664064 00:05:59.476 13:35:02 -- 
common/autotest_common.sh@352 -- # uses["$mount"]=8785920 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=22541115392 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541656064 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=540672 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:05:59.476 13:35:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:05:59.476 13:35:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:05:59.476 13:35:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:59.476 13:35:02 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:05:59.476 * Looking for test storage... 
00:05:59.476 13:35:02 -- common/autotest_common.sh@357 -- # local target_space new_size 00:05:59.476 13:35:02 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:05:59.476 13:35:02 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.476 13:35:02 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:59.476 13:35:02 -- common/autotest_common.sh@361 -- # mount=/ 00:05:59.476 13:35:02 -- common/autotest_common.sh@363 -- # target_space=37229850624 00:05:59.476 13:35:02 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:05:59.476 13:35:02 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:05:59.476 13:35:02 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@370 -- # new_size=10068049920 00:05:59.476 13:35:02 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:59.476 13:35:02 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.476 13:35:02 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.476 13:35:02 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.476 13:35:02 -- common/autotest_common.sh@378 -- # return 0 00:05:59.476 13:35:02 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:59.476 13:35:02 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 
00:05:59.476 13:35:02 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:59.476 13:35:02 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:59.476 13:35:02 -- common/autotest_common.sh@1673 -- # true 00:05:59.476 13:35:02 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:59.476 13:35:02 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:59.476 13:35:02 -- common/autotest_common.sh@27 -- # exec 00:05:59.476 13:35:02 -- common/autotest_common.sh@29 -- # exec 00:05:59.476 13:35:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:59.476 13:35:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:59.477 13:35:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:59.477 13:35:02 -- common/autotest_common.sh@18 -- # set -x 00:05:59.477 13:35:02 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.477 13:35:02 -- nvmf/common.sh@7 -- # uname -s 00:05:59.477 13:35:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.477 13:35:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.477 13:35:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.477 13:35:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.477 13:35:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.477 13:35:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.477 13:35:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.477 13:35:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.477 13:35:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.477 13:35:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.477 13:35:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
00:05:59.477 13:35:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:59.477 13:35:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.477 13:35:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.477 13:35:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.477 13:35:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.477 13:35:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.477 13:35:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.477 13:35:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.477 13:35:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.477 13:35:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.477 13:35:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.477 13:35:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.477 13:35:02 -- paths/export.sh@5 -- # export PATH 00:05:59.477 13:35:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.477 13:35:02 -- nvmf/common.sh@47 
-- # : 0 00:05:59.477 13:35:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.477 13:35:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.477 13:35:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.477 13:35:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.477 13:35:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.477 13:35:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.477 13:35:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.477 13:35:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.477 13:35:02 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:59.477 13:35:02 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:59.477 13:35:02 -- target/filesystem.sh@15 -- # nvmftestinit 00:05:59.477 13:35:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:59.477 13:35:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.477 13:35:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:59.477 13:35:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:59.477 13:35:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:59.477 13:35:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.477 13:35:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:59.477 13:35:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.477 13:35:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:59.477 13:35:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:59.477 13:35:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:59.477 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.379 13:35:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:01.379 13:35:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:01.379 13:35:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:01.379 13:35:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:01.379 13:35:04 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:06:01.379 13:35:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:01.379 13:35:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:01.379 13:35:04 -- nvmf/common.sh@295 -- # net_devs=() 00:06:01.379 13:35:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:01.379 13:35:04 -- nvmf/common.sh@296 -- # e810=() 00:06:01.379 13:35:04 -- nvmf/common.sh@296 -- # local -ga e810 00:06:01.379 13:35:04 -- nvmf/common.sh@297 -- # x722=() 00:06:01.379 13:35:04 -- nvmf/common.sh@297 -- # local -ga x722 00:06:01.379 13:35:04 -- nvmf/common.sh@298 -- # mlx=() 00:06:01.380 13:35:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:01.380 13:35:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:01.380 13:35:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:01.380 13:35:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:06:01.380 13:35:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:01.380 13:35:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:01.380 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:01.380 13:35:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:01.380 13:35:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:01.380 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:01.380 13:35:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:01.380 13:35:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.380 13:35:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.380 13:35:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:01.380 Found net devices under 0000:84:00.0: cvl_0_0 00:06:01.380 13:35:04 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:01.380 13:35:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:01.380 13:35:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.380 13:35:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.380 13:35:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:01.380 Found net devices under 0000:84:00.1: cvl_0_1 00:06:01.380 13:35:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.380 13:35:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:01.380 13:35:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:01.380 13:35:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:01.380 13:35:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:01.380 13:35:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:01.380 13:35:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:01.380 13:35:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:01.380 13:35:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:01.380 13:35:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:01.380 13:35:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:01.380 13:35:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:01.380 13:35:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:01.380 13:35:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:01.380 13:35:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:01.380 13:35:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:01.380 13:35:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.380 13:35:04 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.380 13:35:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.380 13:35:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:01.380 13:35:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.380 13:35:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.638 13:35:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.638 13:35:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:01.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:06:01.638 00:06:01.638 --- 10.0.0.2 ping statistics --- 00:06:01.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.638 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:01.638 13:35:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:01.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:06:01.638 00:06:01.638 --- 10.0.0.1 ping statistics --- 00:06:01.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.638 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:06:01.638 13:35:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.638 13:35:04 -- nvmf/common.sh@411 -- # return 0 00:06:01.638 13:35:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:01.638 13:35:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.638 13:35:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:01.638 13:35:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:01.638 13:35:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.638 13:35:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:01.638 13:35:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:01.638 13:35:04 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:01.638 13:35:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:01.638 13:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.638 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.638 ************************************ 00:06:01.638 START TEST nvmf_filesystem_no_in_capsule 00:06:01.638 ************************************ 00:06:01.638 13:35:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:01.638 13:35:04 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:01.638 13:35:04 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:01.638 13:35:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:01.638 13:35:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:01.638 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.638 13:35:04 -- nvmf/common.sh@470 -- # nvmfpid=2505791 00:06:01.638 13:35:04 -- nvmf/common.sh@469 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:01.638 13:35:04 -- nvmf/common.sh@471 -- # waitforlisten 2505791 00:06:01.638 13:35:04 -- common/autotest_common.sh@817 -- # '[' -z 2505791 ']' 00:06:01.638 13:35:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.638 13:35:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.638 13:35:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.638 13:35:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.638 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.638 [2024-04-18 13:35:04.376947] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:01.638 [2024-04-18 13:35:04.377027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.895 [2024-04-18 13:35:04.446346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.895 [2024-04-18 13:35:04.568161] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.895 [2024-04-18 13:35:04.568248] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.895 [2024-04-18 13:35:04.568265] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.895 [2024-04-18 13:35:04.568278] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:01.895 [2024-04-18 13:35:04.568290] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:01.895 [2024-04-18 13:35:04.568360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.895 [2024-04-18 13:35:04.568413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.895 [2024-04-18 13:35:04.568473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.895 [2024-04-18 13:35:04.568477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.828 13:35:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.828 13:35:05 -- common/autotest_common.sh@850 -- # return 0 00:06:02.828 13:35:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:02.828 13:35:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 13:35:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.828 13:35:05 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:02.828 13:35:05 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:02.828 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 [2024-04-18 13:35:05.374149] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.828 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.828 13:35:05 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:02.828 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 Malloc1 00:06:02.828 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.828 13:35:05 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDKISFASTANDAWESOME 00:06:02.828 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.828 13:35:05 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:02.828 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.828 13:35:05 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:02.828 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.828 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.828 [2024-04-18 13:35:05.562735] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.828 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.828 13:35:05 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:02.828 13:35:05 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:02.829 13:35:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:02.829 13:35:05 -- common/autotest_common.sh@1366 -- # local bs 00:06:02.829 13:35:05 -- common/autotest_common.sh@1367 -- # local nb 00:06:02.829 13:35:05 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:02.829 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.829 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.829 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.829 13:35:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:02.829 { 00:06:02.829 "name": "Malloc1", 00:06:02.829 "aliases": [ 00:06:02.829 "b3b59411-41b8-4381-b08b-466f80fc119a" 00:06:02.829 ], 00:06:02.829 "product_name": 
"Malloc disk", 00:06:02.829 "block_size": 512, 00:06:02.829 "num_blocks": 1048576, 00:06:02.829 "uuid": "b3b59411-41b8-4381-b08b-466f80fc119a", 00:06:02.829 "assigned_rate_limits": { 00:06:02.829 "rw_ios_per_sec": 0, 00:06:02.829 "rw_mbytes_per_sec": 0, 00:06:02.829 "r_mbytes_per_sec": 0, 00:06:02.829 "w_mbytes_per_sec": 0 00:06:02.829 }, 00:06:02.829 "claimed": true, 00:06:02.829 "claim_type": "exclusive_write", 00:06:02.829 "zoned": false, 00:06:02.829 "supported_io_types": { 00:06:02.829 "read": true, 00:06:02.829 "write": true, 00:06:02.829 "unmap": true, 00:06:02.829 "write_zeroes": true, 00:06:02.829 "flush": true, 00:06:02.829 "reset": true, 00:06:02.829 "compare": false, 00:06:02.829 "compare_and_write": false, 00:06:02.829 "abort": true, 00:06:02.829 "nvme_admin": false, 00:06:02.829 "nvme_io": false 00:06:02.829 }, 00:06:02.829 "memory_domains": [ 00:06:02.829 { 00:06:02.829 "dma_device_id": "system", 00:06:02.829 "dma_device_type": 1 00:06:02.829 }, 00:06:02.829 { 00:06:02.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.829 "dma_device_type": 2 00:06:02.829 } 00:06:02.829 ], 00:06:02.829 "driver_specific": {} 00:06:02.829 } 00:06:02.829 ]' 00:06:02.829 13:35:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:02.829 13:35:05 -- common/autotest_common.sh@1369 -- # bs=512 00:06:02.829 13:35:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:03.094 13:35:05 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:03.094 13:35:05 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:03.094 13:35:05 -- common/autotest_common.sh@1374 -- # echo 512 00:06:03.094 13:35:05 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:03.094 13:35:05 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:03.687 13:35:06 -- target/filesystem.sh@62 
-- # waitforserial SPDKISFASTANDAWESOME 00:06:03.687 13:35:06 -- common/autotest_common.sh@1184 -- # local i=0 00:06:03.687 13:35:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:03.687 13:35:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:03.687 13:35:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:05.591 13:35:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:05.591 13:35:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:05.591 13:35:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:05.591 13:35:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:05.591 13:35:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:05.591 13:35:08 -- common/autotest_common.sh@1194 -- # return 0 00:06:05.591 13:35:08 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:05.591 13:35:08 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:05.591 13:35:08 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:05.591 13:35:08 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:05.591 13:35:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:05.591 13:35:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:05.591 13:35:08 -- setup/common.sh@80 -- # echo 536870912 00:06:05.591 13:35:08 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:05.591 13:35:08 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:05.591 13:35:08 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:05.591 13:35:08 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:05.850 13:35:08 -- target/filesystem.sh@69 -- # partprobe 00:06:06.110 13:35:08 -- target/filesystem.sh@70 -- # sleep 1 00:06:07.043 13:35:09 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:07.043 13:35:09 -- target/filesystem.sh@77 -- # 
run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:07.043 13:35:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:07.043 13:35:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.043 13:35:09 -- common/autotest_common.sh@10 -- # set +x 00:06:07.303 ************************************ 00:06:07.303 START TEST filesystem_ext4 00:06:07.303 ************************************ 00:06:07.303 13:35:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:07.303 13:35:09 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:07.303 13:35:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.303 13:35:09 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:07.303 13:35:09 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:07.303 13:35:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:07.303 13:35:09 -- common/autotest_common.sh@914 -- # local i=0 00:06:07.303 13:35:09 -- common/autotest_common.sh@915 -- # local force 00:06:07.303 13:35:09 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:07.303 13:35:09 -- common/autotest_common.sh@918 -- # force=-F 00:06:07.303 13:35:09 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:07.303 mke2fs 1.46.5 (30-Dec-2021) 00:06:07.303 Discarding device blocks: 0/522240 done 00:06:07.303 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:07.303 Filesystem UUID: 500266b9-04a4-4341-8f87-0287cf5d20a6 00:06:07.303 Superblock backups stored on blocks: 00:06:07.303 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:07.303 00:06:07.303 Allocating group tables: 0/64 done 00:06:07.303 Writing inode tables: 0/64 done 00:06:07.559 Creating journal (8192 blocks): done 00:06:08.383 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:06:08.383 00:06:08.383 13:35:11 -- common/autotest_common.sh@931 -- # return 0 00:06:08.383 13:35:11 -- 
target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:09.320 13:35:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:09.320 13:35:11 -- target/filesystem.sh@25 -- # sync 00:06:09.320 13:35:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:09.320 13:35:11 -- target/filesystem.sh@27 -- # sync 00:06:09.320 13:35:11 -- target/filesystem.sh@29 -- # i=0 00:06:09.320 13:35:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:09.320 13:35:11 -- target/filesystem.sh@37 -- # kill -0 2505791 00:06:09.320 13:35:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:09.320 13:35:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:09.320 13:35:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:09.320 13:35:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:09.320 00:06:09.320 real 0m2.057s 00:06:09.320 user 0m0.010s 00:06:09.320 sys 0m0.034s 00:06:09.320 13:35:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.320 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 END TEST filesystem_ext4 00:06:09.320 ************************************ 00:06:09.320 13:35:11 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:09.320 13:35:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:09.320 13:35:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.320 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 START TEST filesystem_btrfs 00:06:09.320 ************************************ 00:06:09.320 13:35:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:09.320 13:35:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:09.320 13:35:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:09.320 13:35:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 
00:06:09.320 13:35:12 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:09.320 13:35:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:09.320 13:35:12 -- common/autotest_common.sh@914 -- # local i=0 00:06:09.320 13:35:12 -- common/autotest_common.sh@915 -- # local force 00:06:09.320 13:35:12 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:09.320 13:35:12 -- common/autotest_common.sh@920 -- # force=-f 00:06:09.320 13:35:12 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:09.889 btrfs-progs v6.6.2 00:06:09.889 See https://btrfs.readthedocs.io for more information. 00:06:09.889 00:06:09.889 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:09.889 NOTE: several default settings have changed in version 5.15, please make sure 00:06:09.889 this does not affect your deployments: 00:06:09.889 - DUP for metadata (-m dup) 00:06:09.889 - enabled no-holes (-O no-holes) 00:06:09.889 - enabled free-space-tree (-R free-space-tree) 00:06:09.889 00:06:09.889 Label: (null) 00:06:09.889 UUID: 1428590b-61b7-468a-871b-7e48934df5b0 00:06:09.889 Node size: 16384 00:06:09.889 Sector size: 4096 00:06:09.889 Filesystem size: 510.00MiB 00:06:09.889 Block group profiles: 00:06:09.889 Data: single 8.00MiB 00:06:09.889 Metadata: DUP 32.00MiB 00:06:09.889 System: DUP 8.00MiB 00:06:09.889 SSD detected: yes 00:06:09.889 Zoned device: no 00:06:09.889 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:09.889 Runtime features: free-space-tree 00:06:09.889 Checksum: crc32c 00:06:09.889 Number of devices: 1 00:06:09.889 Devices: 00:06:09.889 ID SIZE PATH 00:06:09.889 1 510.00MiB /dev/nvme0n1p1 00:06:09.889 00:06:09.889 13:35:12 -- common/autotest_common.sh@931 -- # return 0 00:06:09.889 13:35:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.826 13:35:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.826 13:35:13 -- target/filesystem.sh@25 -- 
# sync 00:06:10.826 13:35:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.826 13:35:13 -- target/filesystem.sh@27 -- # sync 00:06:10.826 13:35:13 -- target/filesystem.sh@29 -- # i=0 00:06:10.826 13:35:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.826 13:35:13 -- target/filesystem.sh@37 -- # kill -0 2505791 00:06:10.826 13:35:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.826 13:35:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.826 13:35:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.826 13:35:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.826 00:06:10.826 real 0m1.343s 00:06:10.826 user 0m0.014s 00:06:10.826 sys 0m0.047s 00:06:10.826 13:35:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.826 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.826 ************************************ 00:06:10.826 END TEST filesystem_btrfs 00:06:10.826 ************************************ 00:06:10.826 13:35:13 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:10.826 13:35:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:10.826 13:35:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.826 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.826 ************************************ 00:06:10.826 START TEST filesystem_xfs 00:06:10.826 ************************************ 00:06:10.826 13:35:13 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:10.826 13:35:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:10.826 13:35:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.826 13:35:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:10.826 13:35:13 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:10.826 13:35:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:10.826 13:35:13 -- 
common/autotest_common.sh@914 -- # local i=0 00:06:10.826 13:35:13 -- common/autotest_common.sh@915 -- # local force 00:06:10.826 13:35:13 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:10.826 13:35:13 -- common/autotest_common.sh@920 -- # force=-f 00:06:10.826 13:35:13 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:11.085 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:11.085 = sectsz=512 attr=2, projid32bit=1 00:06:11.085 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:11.085 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:11.085 data = bsize=4096 blocks=130560, imaxpct=25 00:06:11.085 = sunit=0 swidth=0 blks 00:06:11.085 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:11.085 log =internal log bsize=4096 blocks=16384, version=2 00:06:11.085 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:11.085 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:11.651 Discarding blocks...Done. 00:06:11.651 13:35:14 -- common/autotest_common.sh@931 -- # return 0 00:06:11.651 13:35:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:13.552 13:35:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:13.552 13:35:16 -- target/filesystem.sh@25 -- # sync 00:06:13.552 13:35:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:13.552 13:35:16 -- target/filesystem.sh@27 -- # sync 00:06:13.552 13:35:16 -- target/filesystem.sh@29 -- # i=0 00:06:13.552 13:35:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:13.552 13:35:16 -- target/filesystem.sh@37 -- # kill -0 2505791 00:06:13.552 13:35:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:13.552 13:35:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:13.552 13:35:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:13.552 13:35:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:13.552 00:06:13.552 real 0m2.743s 00:06:13.552 user 0m0.018s 00:06:13.552 sys 0m0.039s 00:06:13.552 13:35:16 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.552 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:06:13.552 ************************************ 00:06:13.552 END TEST filesystem_xfs 00:06:13.552 ************************************ 00:06:13.552 13:35:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:13.552 13:35:16 -- target/filesystem.sh@93 -- # sync 00:06:13.552 13:35:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:13.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:13.810 13:35:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:13.810 13:35:16 -- common/autotest_common.sh@1205 -- # local i=0 00:06:13.810 13:35:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:13.810 13:35:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:13.810 13:35:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:13.810 13:35:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:13.810 13:35:16 -- common/autotest_common.sh@1217 -- # return 0 00:06:13.810 13:35:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:13.810 13:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.810 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:06:13.810 13:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.810 13:35:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:13.810 13:35:16 -- target/filesystem.sh@101 -- # killprocess 2505791 00:06:13.810 13:35:16 -- common/autotest_common.sh@936 -- # '[' -z 2505791 ']' 00:06:13.810 13:35:16 -- common/autotest_common.sh@940 -- # kill -0 2505791 00:06:13.810 13:35:16 -- common/autotest_common.sh@941 -- # uname 00:06:13.810 13:35:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.810 13:35:16 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2505791 00:06:13.810 13:35:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.810 13:35:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.810 13:35:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2505791' 00:06:13.810 killing process with pid 2505791 00:06:13.810 13:35:16 -- common/autotest_common.sh@955 -- # kill 2505791 00:06:13.810 13:35:16 -- common/autotest_common.sh@960 -- # wait 2505791 00:06:14.379 13:35:17 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:14.379 00:06:14.379 real 0m12.685s 00:06:14.379 user 0m48.861s 00:06:14.379 sys 0m1.808s 00:06:14.379 13:35:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.379 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.379 ************************************ 00:06:14.379 END TEST nvmf_filesystem_no_in_capsule 00:06:14.379 ************************************ 00:06:14.379 13:35:17 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:14.379 13:35:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:14.379 13:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.379 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.379 ************************************ 00:06:14.379 START TEST nvmf_filesystem_in_capsule 00:06:14.379 ************************************ 00:06:14.379 13:35:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:14.379 13:35:17 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:14.379 13:35:17 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:14.379 13:35:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:14.379 13:35:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:14.379 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.379 13:35:17 -- nvmf/common.sh@470 -- # nvmfpid=2507520 00:06:14.379 
13:35:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.379 13:35:17 -- nvmf/common.sh@471 -- # waitforlisten 2507520 00:06:14.379 13:35:17 -- common/autotest_common.sh@817 -- # '[' -z 2507520 ']' 00:06:14.379 13:35:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.379 13:35:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:14.379 13:35:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.379 13:35:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:14.379 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.639 [2024-04-18 13:35:17.193768] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:14.639 [2024-04-18 13:35:17.193849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.639 [2024-04-18 13:35:17.266517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.639 [2024-04-18 13:35:17.389720] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.639 [2024-04-18 13:35:17.389789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.639 [2024-04-18 13:35:17.389805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.639 [2024-04-18 13:35:17.389819] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:14.639 [2024-04-18 13:35:17.389832] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.639 [2024-04-18 13:35:17.389900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.639 [2024-04-18 13:35:17.389955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.639 [2024-04-18 13:35:17.393199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.639 [2024-04-18 13:35:17.393212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.899 13:35:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.899 13:35:17 -- common/autotest_common.sh@850 -- # return 0 00:06:14.899 13:35:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:14.899 13:35:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:14.899 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 13:35:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.899 13:35:17 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:14.899 13:35:17 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:14.899 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.899 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.899 [2024-04-18 13:35:17.551908] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.899 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.899 13:35:17 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:14.899 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.899 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.158 Malloc1 00:06:15.158 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.158 13:35:17 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:15.158 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.158 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.158 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.158 13:35:17 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:15.158 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.158 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.158 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.158 13:35:17 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.158 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.158 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.158 [2024-04-18 13:35:17.737837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.158 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.158 13:35:17 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:15.158 13:35:17 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:15.158 13:35:17 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:15.158 13:35:17 -- common/autotest_common.sh@1366 -- # local bs 00:06:15.158 13:35:17 -- common/autotest_common.sh@1367 -- # local nb 00:06:15.158 13:35:17 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:15.158 13:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.158 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.158 13:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.158 13:35:17 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:15.158 { 00:06:15.158 "name": "Malloc1", 00:06:15.158 "aliases": [ 00:06:15.158 "980c0f49-0a27-4db5-98a5-2300bbfbe58c" 00:06:15.158 ], 
00:06:15.159 "product_name": "Malloc disk", 00:06:15.159 "block_size": 512, 00:06:15.159 "num_blocks": 1048576, 00:06:15.159 "uuid": "980c0f49-0a27-4db5-98a5-2300bbfbe58c", 00:06:15.159 "assigned_rate_limits": { 00:06:15.159 "rw_ios_per_sec": 0, 00:06:15.159 "rw_mbytes_per_sec": 0, 00:06:15.159 "r_mbytes_per_sec": 0, 00:06:15.159 "w_mbytes_per_sec": 0 00:06:15.159 }, 00:06:15.159 "claimed": true, 00:06:15.159 "claim_type": "exclusive_write", 00:06:15.159 "zoned": false, 00:06:15.159 "supported_io_types": { 00:06:15.159 "read": true, 00:06:15.159 "write": true, 00:06:15.159 "unmap": true, 00:06:15.159 "write_zeroes": true, 00:06:15.159 "flush": true, 00:06:15.159 "reset": true, 00:06:15.159 "compare": false, 00:06:15.159 "compare_and_write": false, 00:06:15.159 "abort": true, 00:06:15.159 "nvme_admin": false, 00:06:15.159 "nvme_io": false 00:06:15.159 }, 00:06:15.159 "memory_domains": [ 00:06:15.159 { 00:06:15.159 "dma_device_id": "system", 00:06:15.159 "dma_device_type": 1 00:06:15.159 }, 00:06:15.159 { 00:06:15.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.159 "dma_device_type": 2 00:06:15.159 } 00:06:15.159 ], 00:06:15.159 "driver_specific": {} 00:06:15.159 } 00:06:15.159 ]' 00:06:15.159 13:35:17 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:15.159 13:35:17 -- common/autotest_common.sh@1369 -- # bs=512 00:06:15.159 13:35:17 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:15.159 13:35:17 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:15.159 13:35:17 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:15.159 13:35:17 -- common/autotest_common.sh@1374 -- # echo 512 00:06:15.159 13:35:17 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:15.159 13:35:17 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:15.727 
13:35:18 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:15.727 13:35:18 -- common/autotest_common.sh@1184 -- # local i=0 00:06:15.727 13:35:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:15.727 13:35:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:15.727 13:35:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:17.627 13:35:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:17.627 13:35:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:17.627 13:35:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:17.627 13:35:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:17.627 13:35:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:17.627 13:35:20 -- common/autotest_common.sh@1194 -- # return 0 00:06:17.627 13:35:20 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:17.627 13:35:20 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:17.627 13:35:20 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:17.627 13:35:20 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:17.627 13:35:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:17.627 13:35:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:17.627 13:35:20 -- setup/common.sh@80 -- # echo 536870912 00:06:17.627 13:35:20 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:17.627 13:35:20 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:17.627 13:35:20 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:17.627 13:35:20 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:18.191 13:35:20 -- target/filesystem.sh@69 -- # partprobe 00:06:18.847 13:35:21 -- target/filesystem.sh@70 -- # sleep 1 00:06:19.779 13:35:22 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:19.779 
13:35:22 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:19.779 13:35:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:19.779 13:35:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.779 13:35:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.038 ************************************ 00:06:20.038 START TEST filesystem_in_capsule_ext4 00:06:20.038 ************************************ 00:06:20.038 13:35:22 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:20.038 13:35:22 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:20.038 13:35:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.038 13:35:22 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:20.038 13:35:22 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:20.038 13:35:22 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:20.038 13:35:22 -- common/autotest_common.sh@914 -- # local i=0 00:06:20.038 13:35:22 -- common/autotest_common.sh@915 -- # local force 00:06:20.038 13:35:22 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:20.038 13:35:22 -- common/autotest_common.sh@918 -- # force=-F 00:06:20.038 13:35:22 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:20.038 mke2fs 1.46.5 (30-Dec-2021) 00:06:20.038 Discarding device blocks: 0/522240 done 00:06:20.038 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:20.038 Filesystem UUID: 8ecbd8ac-91af-443b-a61c-b2fe0eaf93d8 00:06:20.038 Superblock backups stored on blocks: 00:06:20.038 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:20.038 00:06:20.038 Allocating group tables: 0/64 done 00:06:20.038 Writing inode tables: 0/64 done 00:06:20.297 Creating journal (8192 blocks): done 00:06:20.297 Writing superblocks and filesystem accounting information: 0/64 done 00:06:20.297 00:06:20.297 13:35:22 -- 
common/autotest_common.sh@931 -- # return 0 00:06:20.297 13:35:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:20.865 13:35:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:20.865 13:35:23 -- target/filesystem.sh@25 -- # sync 00:06:20.865 13:35:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:20.865 13:35:23 -- target/filesystem.sh@27 -- # sync 00:06:20.865 13:35:23 -- target/filesystem.sh@29 -- # i=0 00:06:20.865 13:35:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:20.865 13:35:23 -- target/filesystem.sh@37 -- # kill -0 2507520 00:06:20.865 13:35:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:20.865 13:35:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:20.865 13:35:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:20.865 13:35:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:20.865 00:06:20.865 real 0m0.885s 00:06:20.865 user 0m0.015s 00:06:20.865 sys 0m0.033s 00:06:20.865 13:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.865 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.865 ************************************ 00:06:20.865 END TEST filesystem_in_capsule_ext4 00:06:20.865 ************************************ 00:06:20.865 13:35:23 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:20.865 13:35:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.865 13:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.865 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.865 ************************************ 00:06:20.865 START TEST filesystem_in_capsule_btrfs 00:06:20.865 ************************************ 00:06:20.865 13:35:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:20.865 13:35:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:20.865 13:35:23 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:06:20.865 13:35:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:20.865 13:35:23 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:20.865 13:35:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:20.865 13:35:23 -- common/autotest_common.sh@914 -- # local i=0 00:06:20.865 13:35:23 -- common/autotest_common.sh@915 -- # local force 00:06:20.865 13:35:23 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:20.865 13:35:23 -- common/autotest_common.sh@920 -- # force=-f 00:06:20.865 13:35:23 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:21.431 btrfs-progs v6.6.2 00:06:21.431 See https://btrfs.readthedocs.io for more information. 00:06:21.431 00:06:21.431 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:21.431 NOTE: several default settings have changed in version 5.15, please make sure 00:06:21.431 this does not affect your deployments: 00:06:21.431 - DUP for metadata (-m dup) 00:06:21.431 - enabled no-holes (-O no-holes) 00:06:21.431 - enabled free-space-tree (-R free-space-tree) 00:06:21.431 00:06:21.431 Label: (null) 00:06:21.431 UUID: c9198ce8-2370-4721-9062-456061bb0133 00:06:21.431 Node size: 16384 00:06:21.431 Sector size: 4096 00:06:21.431 Filesystem size: 510.00MiB 00:06:21.431 Block group profiles: 00:06:21.431 Data: single 8.00MiB 00:06:21.431 Metadata: DUP 32.00MiB 00:06:21.431 System: DUP 8.00MiB 00:06:21.431 SSD detected: yes 00:06:21.431 Zoned device: no 00:06:21.431 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:21.431 Runtime features: free-space-tree 00:06:21.431 Checksum: crc32c 00:06:21.431 Number of devices: 1 00:06:21.431 Devices: 00:06:21.432 ID SIZE PATH 00:06:21.432 1 510.00MiB /dev/nvme0n1p1 00:06:21.432 00:06:21.432 13:35:24 -- common/autotest_common.sh@931 -- # return 0 00:06:21.432 13:35:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.689 
13:35:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.689 13:35:24 -- target/filesystem.sh@25 -- # sync 00:06:21.689 13:35:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.689 13:35:24 -- target/filesystem.sh@27 -- # sync 00:06:21.689 13:35:24 -- target/filesystem.sh@29 -- # i=0 00:06:21.689 13:35:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.689 13:35:24 -- target/filesystem.sh@37 -- # kill -0 2507520 00:06:21.689 13:35:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.689 13:35:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.689 13:35:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.689 13:35:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.689 00:06:21.689 real 0m0.807s 00:06:21.689 user 0m0.015s 00:06:21.689 sys 0m0.046s 00:06:21.689 13:35:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.689 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:06:21.689 ************************************ 00:06:21.689 END TEST filesystem_in_capsule_btrfs 00:06:21.689 ************************************ 00:06:21.689 13:35:24 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:21.689 13:35:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:21.689 13:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.689 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:06:21.948 ************************************ 00:06:21.948 START TEST filesystem_in_capsule_xfs 00:06:21.948 ************************************ 00:06:21.948 13:35:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:21.948 13:35:24 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:21.948 13:35:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.948 13:35:24 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:21.948 13:35:24 -- common/autotest_common.sh@912 
-- # local fstype=xfs 00:06:21.948 13:35:24 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:21.949 13:35:24 -- common/autotest_common.sh@914 -- # local i=0 00:06:21.949 13:35:24 -- common/autotest_common.sh@915 -- # local force 00:06:21.949 13:35:24 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:21.949 13:35:24 -- common/autotest_common.sh@920 -- # force=-f 00:06:21.949 13:35:24 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:21.949 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:21.949 = sectsz=512 attr=2, projid32bit=1 00:06:21.949 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:21.949 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:21.949 data = bsize=4096 blocks=130560, imaxpct=25 00:06:21.949 = sunit=0 swidth=0 blks 00:06:21.949 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:21.949 log =internal log bsize=4096 blocks=16384, version=2 00:06:21.949 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:21.949 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:22.883 Discarding blocks...Done. 
00:06:22.883 13:35:25 -- common/autotest_common.sh@931 -- # return 0 00:06:22.883 13:35:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.788 13:35:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.788 13:35:27 -- target/filesystem.sh@25 -- # sync 00:06:24.788 13:35:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.788 13:35:27 -- target/filesystem.sh@27 -- # sync 00:06:24.788 13:35:27 -- target/filesystem.sh@29 -- # i=0 00:06:24.788 13:35:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.788 13:35:27 -- target/filesystem.sh@37 -- # kill -0 2507520 00:06:24.788 13:35:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.788 13:35:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.788 13:35:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.788 13:35:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.788 00:06:24.788 real 0m2.614s 00:06:24.788 user 0m0.011s 00:06:24.788 sys 0m0.045s 00:06:24.788 13:35:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.788 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 ************************************ 00:06:24.788 END TEST filesystem_in_capsule_xfs 00:06:24.788 ************************************ 00:06:24.788 13:35:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:24.788 13:35:27 -- target/filesystem.sh@93 -- # sync 00:06:24.788 13:35:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:24.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:24.788 13:35:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:24.788 13:35:27 -- common/autotest_common.sh@1205 -- # local i=0 00:06:24.788 13:35:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:24.788 13:35:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.788 13:35:27 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:24.788 13:35:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.788 13:35:27 -- common/autotest_common.sh@1217 -- # return 0 00:06:24.788 13:35:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.788 13:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.788 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 13:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.788 13:35:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:24.788 13:35:27 -- target/filesystem.sh@101 -- # killprocess 2507520 00:06:24.788 13:35:27 -- common/autotest_common.sh@936 -- # '[' -z 2507520 ']' 00:06:24.788 13:35:27 -- common/autotest_common.sh@940 -- # kill -0 2507520 00:06:24.788 13:35:27 -- common/autotest_common.sh@941 -- # uname 00:06:24.788 13:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.788 13:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2507520 00:06:24.788 13:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.788 13:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.788 13:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2507520' 00:06:24.788 killing process with pid 2507520 00:06:24.788 13:35:27 -- common/autotest_common.sh@955 -- # kill 2507520 00:06:24.788 13:35:27 -- common/autotest_common.sh@960 -- # wait 2507520 00:06:25.357 13:35:27 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:25.357 00:06:25.357 real 0m10.775s 00:06:25.357 user 0m41.173s 00:06:25.357 sys 0m1.649s 00:06:25.357 13:35:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.357 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.357 ************************************ 00:06:25.357 END TEST nvmf_filesystem_in_capsule 00:06:25.357 
************************************ 00:06:25.357 13:35:27 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:25.357 13:35:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:25.357 13:35:27 -- nvmf/common.sh@117 -- # sync 00:06:25.357 13:35:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:25.357 13:35:27 -- nvmf/common.sh@120 -- # set +e 00:06:25.357 13:35:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:25.357 13:35:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:25.357 rmmod nvme_tcp 00:06:25.357 rmmod nvme_fabrics 00:06:25.357 rmmod nvme_keyring 00:06:25.357 13:35:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:25.357 13:35:27 -- nvmf/common.sh@124 -- # set -e 00:06:25.357 13:35:27 -- nvmf/common.sh@125 -- # return 0 00:06:25.357 13:35:27 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:25.357 13:35:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:25.357 13:35:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:25.357 13:35:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:25.357 13:35:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.357 13:35:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.357 13:35:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.357 13:35:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.357 13:35:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.261 13:35:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.261 00:06:27.261 real 0m28.129s 00:06:27.261 user 1m31.006s 00:06:27.261 sys 0m5.134s 00:06:27.261 13:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.261 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.261 ************************************ 00:06:27.261 END TEST nvmf_filesystem 00:06:27.261 ************************************ 00:06:27.261 13:35:30 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:27.261 13:35:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:27.261 13:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.261 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.520 ************************************ 00:06:27.520 START TEST nvmf_discovery 00:06:27.520 ************************************ 00:06:27.520 13:35:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:27.520 * Looking for test storage... 00:06:27.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.520 13:35:30 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.520 13:35:30 -- nvmf/common.sh@7 -- # uname -s 00:06:27.520 13:35:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.520 13:35:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.520 13:35:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.520 13:35:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.520 13:35:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.520 13:35:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.520 13:35:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.520 13:35:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.520 13:35:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.520 13:35:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.520 13:35:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:27.520 13:35:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:27.520 13:35:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:27.520 13:35:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.520 13:35:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.520 13:35:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.520 13:35:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.520 13:35:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.520 13:35:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.520 13:35:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.520 13:35:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.520 13:35:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.520 13:35:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.520 13:35:30 -- paths/export.sh@5 -- # export PATH 00:06:27.520 13:35:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.520 13:35:30 -- nvmf/common.sh@47 -- # : 0 00:06:27.520 13:35:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.520 13:35:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.520 13:35:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.520 13:35:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.520 13:35:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.520 13:35:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.520 13:35:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.520 13:35:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.520 13:35:30 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:27.520 13:35:30 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:27.520 13:35:30 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:27.520 13:35:30 -- target/discovery.sh@15 -- # hash nvme 00:06:27.520 13:35:30 -- target/discovery.sh@20 -- # nvmftestinit 00:06:27.520 13:35:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:27.520 13:35:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.520 13:35:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:27.520 13:35:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:27.520 13:35:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:27.520 13:35:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.520 13:35:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.520 13:35:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.520 13:35:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:27.520 13:35:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:27.520 13:35:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:27.520 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.426 13:35:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:29.426 13:35:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:29.426 13:35:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:29.426 13:35:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:29.426 13:35:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:29.426 13:35:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:29.426 13:35:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:29.426 13:35:32 -- nvmf/common.sh@295 -- # net_devs=() 00:06:29.426 13:35:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:29.426 13:35:32 -- nvmf/common.sh@296 -- # e810=() 00:06:29.426 13:35:32 -- nvmf/common.sh@296 -- # local -ga e810 00:06:29.426 13:35:32 -- nvmf/common.sh@297 -- # x722=() 00:06:29.426 13:35:32 -- nvmf/common.sh@297 -- # local -ga x722 00:06:29.426 13:35:32 -- nvmf/common.sh@298 -- # mlx=() 00:06:29.426 13:35:32 
-- nvmf/common.sh@298 -- # local -ga mlx 00:06:29.426 13:35:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.426 13:35:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:29.426 13:35:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:29.426 13:35:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.426 13:35:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:29.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:29.426 13:35:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.426 
13:35:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.426 13:35:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:29.426 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:29.426 13:35:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.426 13:35:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.426 13:35:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.426 13:35:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:29.426 Found net devices under 0000:84:00.0: cvl_0_0 00:06:29.426 13:35:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.426 13:35:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.426 13:35:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.426 13:35:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.426 13:35:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:29.426 Found net devices under 0000:84:00.1: cvl_0_1 00:06:29.426 13:35:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.426 13:35:32 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:29.426 13:35:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:29.426 13:35:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:29.426 13:35:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.426 13:35:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.426 13:35:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.426 13:35:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:29.426 13:35:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.426 13:35:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.426 13:35:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:29.426 13:35:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.426 13:35:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.426 13:35:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:29.426 13:35:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:29.426 13:35:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.426 13:35:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.685 13:35:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.685 13:35:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.685 13:35:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:29.685 13:35:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.685 13:35:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.685 13:35:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.685 13:35:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:06:29.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:06:29.685 00:06:29.685 --- 10.0.0.2 ping statistics --- 00:06:29.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.685 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:06:29.685 13:35:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:06:29.685 00:06:29.685 --- 10.0.0.1 ping statistics --- 00:06:29.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.685 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:06:29.685 13:35:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.685 13:35:32 -- nvmf/common.sh@411 -- # return 0 00:06:29.685 13:35:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:29.685 13:35:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.685 13:35:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:29.685 13:35:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:29.685 13:35:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.685 13:35:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:29.685 13:35:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:29.685 13:35:32 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:29.685 13:35:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:29.685 13:35:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:29.685 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 13:35:32 -- nvmf/common.sh@470 -- # nvmfpid=2510913 00:06:29.685 13:35:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:29.685 13:35:32 -- nvmf/common.sh@471 -- # waitforlisten 
2510913 00:06:29.685 13:35:32 -- common/autotest_common.sh@817 -- # '[' -z 2510913 ']' 00:06:29.685 13:35:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.685 13:35:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.685 13:35:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.685 13:35:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.685 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 [2024-04-18 13:35:32.420907] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:29.685 [2024-04-18 13:35:32.420984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.685 [2024-04-18 13:35:32.486727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.942 [2024-04-18 13:35:32.597351] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.942 [2024-04-18 13:35:32.597413] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.942 [2024-04-18 13:35:32.597426] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.942 [2024-04-18 13:35:32.597438] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.942 [2024-04-18 13:35:32.597448] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:29.942 [2024-04-18 13:35:32.597515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.942 [2024-04-18 13:35:32.597573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.942 [2024-04-18 13:35:32.597638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.942 [2024-04-18 13:35:32.597640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.942 13:35:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.942 13:35:32 -- common/autotest_common.sh@850 -- # return 0 00:06:29.942 13:35:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:29.942 13:35:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:29.942 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.202 13:35:32 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 [2024-04-18 13:35:32.756036] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@26 -- # seq 1 4 00:06:30.202 13:35:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.202 13:35:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 Null1 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 [2024-04-18 13:35:32.796405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.202 13:35:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 Null2 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 
13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.202 13:35:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 Null3 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.202 13:35:32 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 Null4 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.202 13:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.202 13:35:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 13:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.202 13:35:32 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:06:30.462 00:06:30.462 Discovery Log Number of Records 6, Generation counter 6 00:06:30.462 =====Discovery Log Entry 0====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: current discovery subsystem 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4420 00:06:30.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: explicit discovery connections, duplicate discovery information 00:06:30.462 sectype: none 00:06:30.462 =====Discovery Log Entry 1====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: nvme subsystem 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4420 00:06:30.462 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: none 00:06:30.462 sectype: none 00:06:30.462 =====Discovery Log Entry 2====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: nvme subsystem 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4420 00:06:30.462 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: none 00:06:30.462 sectype: none 00:06:30.462 =====Discovery Log Entry 3====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: nvme subsystem 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4420 00:06:30.462 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: none 00:06:30.462 sectype: none 00:06:30.462 =====Discovery Log Entry 4====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: nvme subsystem 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4420 00:06:30.462 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: none 00:06:30.462 sectype: none 00:06:30.462 =====Discovery Log Entry 5====== 00:06:30.462 trtype: tcp 00:06:30.462 adrfam: ipv4 00:06:30.462 subtype: discovery subsystem referral 00:06:30.462 treq: not required 00:06:30.462 portid: 0 00:06:30.462 trsvcid: 4430 00:06:30.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.462 traddr: 10.0.0.2 00:06:30.462 eflags: none 00:06:30.463 sectype: none 00:06:30.463 13:35:33 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:30.463 Perform nvmf subsystem discovery via RPC 00:06:30.463 13:35:33 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 [2024-04-18 13:35:33.020851] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:30.463 [ 00:06:30.463 { 00:06:30.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:30.463 "subtype": "Discovery", 00:06:30.463 "listen_addresses": [ 00:06:30.463 { 00:06:30.463 "transport": "TCP", 00:06:30.463 "trtype": "TCP", 00:06:30.463 "adrfam": "IPv4", 00:06:30.463 "traddr": "10.0.0.2", 00:06:30.463 "trsvcid": "4420" 00:06:30.463 } 00:06:30.463 ], 00:06:30.463 "allow_any_host": true, 00:06:30.463 "hosts": [] 00:06:30.463 }, 00:06:30.463 { 00:06:30.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:30.463 "subtype": "NVMe", 00:06:30.463 "listen_addresses": [ 00:06:30.463 { 00:06:30.463 "transport": "TCP", 00:06:30.463 "trtype": "TCP", 00:06:30.463 "adrfam": "IPv4", 00:06:30.463 "traddr": "10.0.0.2", 00:06:30.463 "trsvcid": "4420" 00:06:30.463 } 00:06:30.463 ], 00:06:30.463 "allow_any_host": true, 00:06:30.463 "hosts": [], 00:06:30.463 "serial_number": "SPDK00000000000001", 00:06:30.463 "model_number": 
"SPDK bdev Controller", 00:06:30.463 "max_namespaces": 32, 00:06:30.463 "min_cntlid": 1, 00:06:30.463 "max_cntlid": 65519, 00:06:30.463 "namespaces": [ 00:06:30.463 { 00:06:30.463 "nsid": 1, 00:06:30.463 "bdev_name": "Null1", 00:06:30.463 "name": "Null1", 00:06:30.463 "nguid": "DE22B90B30434D61B0D44BBBCE57A09C", 00:06:30.463 "uuid": "de22b90b-3043-4d61-b0d4-4bbbce57a09c" 00:06:30.463 } 00:06:30.463 ] 00:06:30.463 }, 00:06:30.463 { 00:06:30.463 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:30.463 "subtype": "NVMe", 00:06:30.463 "listen_addresses": [ 00:06:30.463 { 00:06:30.463 "transport": "TCP", 00:06:30.463 "trtype": "TCP", 00:06:30.463 "adrfam": "IPv4", 00:06:30.463 "traddr": "10.0.0.2", 00:06:30.463 "trsvcid": "4420" 00:06:30.463 } 00:06:30.463 ], 00:06:30.463 "allow_any_host": true, 00:06:30.463 "hosts": [], 00:06:30.463 "serial_number": "SPDK00000000000002", 00:06:30.463 "model_number": "SPDK bdev Controller", 00:06:30.463 "max_namespaces": 32, 00:06:30.463 "min_cntlid": 1, 00:06:30.463 "max_cntlid": 65519, 00:06:30.463 "namespaces": [ 00:06:30.463 { 00:06:30.463 "nsid": 1, 00:06:30.463 "bdev_name": "Null2", 00:06:30.463 "name": "Null2", 00:06:30.463 "nguid": "2107B1AC336045E1BDB4B50A083C0B41", 00:06:30.463 "uuid": "2107b1ac-3360-45e1-bdb4-b50a083c0b41" 00:06:30.463 } 00:06:30.463 ] 00:06:30.463 }, 00:06:30.463 { 00:06:30.463 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:30.463 "subtype": "NVMe", 00:06:30.463 "listen_addresses": [ 00:06:30.463 { 00:06:30.463 "transport": "TCP", 00:06:30.463 "trtype": "TCP", 00:06:30.463 "adrfam": "IPv4", 00:06:30.463 "traddr": "10.0.0.2", 00:06:30.463 "trsvcid": "4420" 00:06:30.463 } 00:06:30.463 ], 00:06:30.463 "allow_any_host": true, 00:06:30.463 "hosts": [], 00:06:30.463 "serial_number": "SPDK00000000000003", 00:06:30.463 "model_number": "SPDK bdev Controller", 00:06:30.463 "max_namespaces": 32, 00:06:30.463 "min_cntlid": 1, 00:06:30.463 "max_cntlid": 65519, 00:06:30.463 "namespaces": [ 00:06:30.463 { 00:06:30.463 "nsid": 1, 
00:06:30.463 "bdev_name": "Null3", 00:06:30.463 "name": "Null3", 00:06:30.463 "nguid": "1E054247B0B145A386E0BDC413199695", 00:06:30.463 "uuid": "1e054247-b0b1-45a3-86e0-bdc413199695" 00:06:30.463 } 00:06:30.463 ] 00:06:30.463 }, 00:06:30.463 { 00:06:30.463 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:30.463 "subtype": "NVMe", 00:06:30.463 "listen_addresses": [ 00:06:30.463 { 00:06:30.463 "transport": "TCP", 00:06:30.463 "trtype": "TCP", 00:06:30.463 "adrfam": "IPv4", 00:06:30.463 "traddr": "10.0.0.2", 00:06:30.463 "trsvcid": "4420" 00:06:30.463 } 00:06:30.463 ], 00:06:30.463 "allow_any_host": true, 00:06:30.463 "hosts": [], 00:06:30.463 "serial_number": "SPDK00000000000004", 00:06:30.463 "model_number": "SPDK bdev Controller", 00:06:30.463 "max_namespaces": 32, 00:06:30.463 "min_cntlid": 1, 00:06:30.463 "max_cntlid": 65519, 00:06:30.463 "namespaces": [ 00:06:30.463 { 00:06:30.463 "nsid": 1, 00:06:30.463 "bdev_name": "Null4", 00:06:30.463 "name": "Null4", 00:06:30.463 "nguid": "F5F120B3BB5D43BAB66D2196DBBFC031", 00:06:30.463 "uuid": "f5f120b3-bb5d-43ba-b66d-2196dbbfc031" 00:06:30.463 } 00:06:30.463 ] 00:06:30.463 } 00:06:30.463 ] 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@42 -- # seq 1 4 00:06:30.463 13:35:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.463 13:35:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.463 13:35:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.463 13:35:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.463 13:35:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 
13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:30.463 13:35:33 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:30.463 13:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.463 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.463 13:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.463 13:35:33 -- target/discovery.sh@49 -- # check_bdevs= 00:06:30.463 13:35:33 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:30.463 13:35:33 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:30.463 13:35:33 -- target/discovery.sh@57 -- # nvmftestfini 00:06:30.463 13:35:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:30.463 13:35:33 -- nvmf/common.sh@117 -- # sync 00:06:30.463 13:35:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:30.464 13:35:33 -- nvmf/common.sh@120 -- # set +e 00:06:30.464 13:35:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:30.464 13:35:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:30.464 rmmod nvme_tcp 00:06:30.464 rmmod nvme_fabrics 00:06:30.464 rmmod nvme_keyring 00:06:30.464 13:35:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:30.464 13:35:33 -- nvmf/common.sh@124 -- # set -e 00:06:30.464 13:35:33 -- nvmf/common.sh@125 -- # return 0 00:06:30.464 13:35:33 -- nvmf/common.sh@478 -- # '[' -n 2510913 ']' 00:06:30.464 13:35:33 -- nvmf/common.sh@479 -- # killprocess 2510913 00:06:30.464 13:35:33 -- common/autotest_common.sh@936 -- # '[' -z 2510913 ']' 00:06:30.464 13:35:33 -- common/autotest_common.sh@940 -- # kill -0 2510913 00:06:30.464 
13:35:33 -- common/autotest_common.sh@941 -- # uname 00:06:30.464 13:35:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.464 13:35:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2510913 00:06:30.464 13:35:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.464 13:35:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.464 13:35:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2510913' 00:06:30.464 killing process with pid 2510913 00:06:30.464 13:35:33 -- common/autotest_common.sh@955 -- # kill 2510913 00:06:30.464 [2024-04-18 13:35:33.226195] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:30.464 13:35:33 -- common/autotest_common.sh@960 -- # wait 2510913 00:06:30.723 13:35:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:30.723 13:35:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:30.723 13:35:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:30.723 13:35:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:30.723 13:35:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:30.723 13:35:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.723 13:35:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.723 13:35:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.262 13:35:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:33.262 00:06:33.262 real 0m5.379s 00:06:33.262 user 0m4.228s 00:06:33.262 sys 0m1.802s 00:06:33.262 13:35:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.262 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.262 ************************************ 00:06:33.262 END TEST nvmf_discovery 00:06:33.262 ************************************ 00:06:33.262 13:35:35 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:33.262 13:35:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:33.262 13:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.262 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.262 ************************************ 00:06:33.262 START TEST nvmf_referrals 00:06:33.262 ************************************ 00:06:33.262 13:35:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:33.262 * Looking for test storage... 00:06:33.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.262 13:35:35 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.262 13:35:35 -- nvmf/common.sh@7 -- # uname -s 00:06:33.262 13:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.262 13:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.262 13:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.262 13:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.262 13:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.262 13:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.262 13:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.262 13:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.262 13:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.262 13:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.262 13:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:33.262 13:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:33.262 13:35:35 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.262 13:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.262 13:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.262 13:35:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.262 13:35:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.262 13:35:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.262 13:35:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.262 13:35:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.262 13:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.262 13:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.262 13:35:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.262 13:35:35 -- paths/export.sh@5 -- # export PATH 00:06:33.263 13:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.263 13:35:35 -- nvmf/common.sh@47 -- # : 0 00:06:33.263 13:35:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.263 13:35:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.263 13:35:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.263 13:35:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.263 13:35:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.263 13:35:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.263 13:35:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.263 13:35:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.263 13:35:35 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:33.263 13:35:35 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:33.263 13:35:35 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:33.263 13:35:35 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:33.263 13:35:35 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:33.263 13:35:35 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:33.263 13:35:35 -- target/referrals.sh@37 -- # nvmftestinit 00:06:33.263 13:35:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:33.263 13:35:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.263 13:35:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:33.263 13:35:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:33.263 13:35:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:33.263 13:35:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.263 13:35:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.263 13:35:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.263 13:35:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:33.263 13:35:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:33.263 13:35:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.263 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:06:35.173 13:35:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:35.173 13:35:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:35.173 13:35:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:35.173 13:35:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:35.173 13:35:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:35.173 13:35:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:35.173 13:35:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:35.173 13:35:37 -- nvmf/common.sh@295 -- # net_devs=() 00:06:35.173 13:35:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:35.173 13:35:37 -- nvmf/common.sh@296 -- # e810=() 00:06:35.173 13:35:37 -- nvmf/common.sh@296 -- # local 
-ga e810 00:06:35.173 13:35:37 -- nvmf/common.sh@297 -- # x722=() 00:06:35.173 13:35:37 -- nvmf/common.sh@297 -- # local -ga x722 00:06:35.173 13:35:37 -- nvmf/common.sh@298 -- # mlx=() 00:06:35.173 13:35:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:35.173 13:35:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.173 13:35:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:35.173 13:35:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:35.173 13:35:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:35.173 13:35:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.173 13:35:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:35.173 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:35.173 13:35:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.173 13:35:37 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.173 13:35:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.174 13:35:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:35.174 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:35.174 13:35:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:35.174 13:35:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.174 13:35:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.174 13:35:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:35.174 13:35:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.174 13:35:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:35.174 Found net devices under 0000:84:00.0: cvl_0_0 00:06:35.174 13:35:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.174 13:35:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.174 13:35:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.174 13:35:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:35.174 13:35:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.174 13:35:37 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:35.174 Found net devices under 0000:84:00.1: cvl_0_1 00:06:35.174 13:35:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.174 13:35:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:35.174 13:35:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:35.174 13:35:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:35.174 13:35:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.174 13:35:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.174 13:35:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.174 13:35:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:35.174 13:35:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.174 13:35:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.174 13:35:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:35.174 13:35:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.174 13:35:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.174 13:35:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:35.174 13:35:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:35.174 13:35:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.174 13:35:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.174 13:35:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.174 13:35:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.174 13:35:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:35.174 13:35:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.174 13:35:37 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:06:35.174 13:35:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.174 13:35:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:35.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:06:35.174 00:06:35.174 --- 10.0.0.2 ping statistics --- 00:06:35.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.174 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:35.174 13:35:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:06:35.174 00:06:35.174 --- 10.0.0.1 ping statistics --- 00:06:35.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.174 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:06:35.174 13:35:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.174 13:35:37 -- nvmf/common.sh@411 -- # return 0 00:06:35.174 13:35:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:35.174 13:35:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.174 13:35:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:35.174 13:35:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.174 13:35:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:35.174 13:35:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:35.174 13:35:37 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:35.174 13:35:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:35.174 13:35:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:35.174 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.174 13:35:37 -- nvmf/common.sh@470 -- # nvmfpid=2513021 00:06:35.174 13:35:37 
-- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:35.174 13:35:37 -- nvmf/common.sh@471 -- # waitforlisten 2513021 00:06:35.174 13:35:37 -- common/autotest_common.sh@817 -- # '[' -z 2513021 ']' 00:06:35.174 13:35:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.174 13:35:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.174 13:35:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.174 13:35:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.174 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:06:35.174 [2024-04-18 13:35:37.924695] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:35.174 [2024-04-18 13:35:37.924771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.459 [2024-04-18 13:35:37.998716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.459 [2024-04-18 13:35:38.121081] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.459 [2024-04-18 13:35:38.121142] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.459 [2024-04-18 13:35:38.121189] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.459 [2024-04-18 13:35:38.121214] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:35.459 [2024-04-18 13:35:38.121232] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.459 [2024-04-18 13:35:38.121302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.459 [2024-04-18 13:35:38.121329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.459 [2024-04-18 13:35:38.121384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.459 [2024-04-18 13:35:38.121390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.393 13:35:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.393 13:35:38 -- common/autotest_common.sh@850 -- # return 0 00:06:36.393 13:35:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:36.393 13:35:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.393 13:35:38 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 [2024-04-18 13:35:38.945280] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:38 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 [2024-04-18 13:35:38.957560] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:38 -- target/referrals.sh@44 -- 
# rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:38 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:38 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:38 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.393 13:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:38 -- target/referrals.sh@48 -- # jq length 00:06:36.393 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:39 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:36.393 13:35:39 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:36.393 13:35:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.393 13:35:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.393 13:35:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.393 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.393 13:35:39 -- target/referrals.sh@21 -- # sort 00:06:36.393 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.393 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.393 13:35:39 -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.393 13:35:39 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.393 13:35:39 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:36.393 13:35:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.393 13:35:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.393 13:35:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.393 13:35:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.393 13:35:39 -- target/referrals.sh@26 -- # sort 00:06:36.651 13:35:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.651 13:35:39 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.651 13:35:39 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:36.651 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.651 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.651 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.651 13:35:39 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:36.651 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.651 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.651 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.651 13:35:39 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:36.651 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.651 13:35:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.651 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.651 13:35:39 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.651 13:35:39 -- target/referrals.sh@56 -- # jq length 00:06:36.651 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.651 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.651 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.651 13:35:39 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:36.651 13:35:39 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:36.651 13:35:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.651 13:35:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.651 13:35:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.651 13:35:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.651 13:35:39 -- target/referrals.sh@26 -- # sort 00:06:36.651 13:35:39 -- target/referrals.sh@26 -- # echo 00:06:36.651 13:35:39 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:36.651 13:35:39 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:36.651 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.651 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.651 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.652 13:35:39 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:36.652 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.652 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.652 13:35:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.652 13:35:39 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:36.652 13:35:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.652 13:35:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.652 13:35:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.652 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.652 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.652 13:35:39 -- target/referrals.sh@21 -- # sort 00:06:36.652 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.652 13:35:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:36.652 13:35:39 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:36.652 13:35:39 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:36.652 13:35:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.652 13:35:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.652 13:35:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.652 13:35:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.652 13:35:39 -- target/referrals.sh@26 -- # sort 00:06:36.910 13:35:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:36.910 13:35:39 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:36.910 13:35:39 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:36.910 13:35:39 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:36.910 13:35:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:36.910 13:35:39 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.910 13:35:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:36.910 13:35:39 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:37.167 13:35:39 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:37.167 13:35:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:37.167 13:35:39 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:37.167 13:35:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.167 13:35:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:37.167 13:35:39 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:37.167 13:35:39 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:37.167 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.167 13:35:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.167 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.167 13:35:39 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:37.167 13:35:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:37.167 13:35:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:37.167 13:35:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:37.167 13:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.167 13:35:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.167 13:35:39 -- target/referrals.sh@21 -- # sort 00:06:37.167 13:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.167 13:35:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:37.167 13:35:39 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:37.167 13:35:39 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:37.167 13:35:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:37.167 13:35:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:37.167 13:35:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.167 13:35:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:37.167 13:35:39 -- target/referrals.sh@26 -- # sort 00:06:37.167 13:35:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:37.167 13:35:39 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:37.167 13:35:39 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:37.167 13:35:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:37.167 13:35:39 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:37.167 13:35:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.167 13:35:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:37.425 13:35:40 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:37.425 13:35:40 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:37.425 13:35:40 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:37.425 13:35:40 -- target/referrals.sh@31 
-- # local 'subtype=discovery subsystem referral' 00:06:37.425 13:35:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.425 13:35:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:37.425 13:35:40 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:37.425 13:35:40 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:37.425 13:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.425 13:35:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.425 13:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.425 13:35:40 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:37.425 13:35:40 -- target/referrals.sh@82 -- # jq length 00:06:37.425 13:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.425 13:35:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.425 13:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.425 13:35:40 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:37.425 13:35:40 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:37.425 13:35:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:37.425 13:35:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:37.425 13:35:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.425 13:35:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:37.425 13:35:40 -- 
target/referrals.sh@26 -- # sort 00:06:37.682 13:35:40 -- target/referrals.sh@26 -- # echo 00:06:37.682 13:35:40 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:37.682 13:35:40 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:37.682 13:35:40 -- target/referrals.sh@86 -- # nvmftestfini 00:06:37.682 13:35:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:37.682 13:35:40 -- nvmf/common.sh@117 -- # sync 00:06:37.682 13:35:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:37.682 13:35:40 -- nvmf/common.sh@120 -- # set +e 00:06:37.682 13:35:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:37.682 13:35:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:37.682 rmmod nvme_tcp 00:06:37.682 rmmod nvme_fabrics 00:06:37.682 rmmod nvme_keyring 00:06:37.682 13:35:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:37.682 13:35:40 -- nvmf/common.sh@124 -- # set -e 00:06:37.682 13:35:40 -- nvmf/common.sh@125 -- # return 0 00:06:37.682 13:35:40 -- nvmf/common.sh@478 -- # '[' -n 2513021 ']' 00:06:37.682 13:35:40 -- nvmf/common.sh@479 -- # killprocess 2513021 00:06:37.682 13:35:40 -- common/autotest_common.sh@936 -- # '[' -z 2513021 ']' 00:06:37.682 13:35:40 -- common/autotest_common.sh@940 -- # kill -0 2513021 00:06:37.682 13:35:40 -- common/autotest_common.sh@941 -- # uname 00:06:37.682 13:35:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.682 13:35:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2513021 00:06:37.682 13:35:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.682 13:35:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.682 13:35:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2513021' 00:06:37.682 killing process with pid 2513021 00:06:37.682 13:35:40 -- common/autotest_common.sh@955 -- # kill 2513021 00:06:37.682 13:35:40 -- common/autotest_common.sh@960 -- # wait 2513021 00:06:37.940 13:35:40 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:37.940 13:35:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:37.940 13:35:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:37.940 13:35:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.940 13:35:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:37.940 13:35:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.940 13:35:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.940 13:35:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.059 13:35:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:40.059 00:06:40.059 real 0m7.063s 00:06:40.059 user 0m11.559s 00:06:40.059 sys 0m2.059s 00:06:40.059 13:35:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.059 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.059 ************************************ 00:06:40.059 END TEST nvmf_referrals 00:06:40.059 ************************************ 00:06:40.059 13:35:42 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.059 13:35:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:40.059 13:35:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.059 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.059 ************************************ 00:06:40.059 START TEST nvmf_connect_disconnect 00:06:40.059 ************************************ 00:06:40.059 13:35:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.317 * Looking for test storage... 
00:06:40.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.317 13:35:42 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.317 13:35:42 -- nvmf/common.sh@7 -- # uname -s 00:06:40.317 13:35:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.317 13:35:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.317 13:35:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.317 13:35:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.317 13:35:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.317 13:35:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.317 13:35:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.317 13:35:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.317 13:35:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.317 13:35:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.317 13:35:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:40.317 13:35:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:40.317 13:35:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.317 13:35:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.317 13:35:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.317 13:35:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.317 13:35:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.317 13:35:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.317 13:35:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.317 13:35:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.317 13:35:42 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.317 13:35:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.317 13:35:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.317 13:35:42 -- paths/export.sh@5 -- # export PATH 00:06:40.317 13:35:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.317 13:35:42 -- nvmf/common.sh@47 -- # : 0 00:06:40.317 13:35:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.317 13:35:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.317 13:35:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.317 13:35:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.317 13:35:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.317 13:35:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.317 13:35:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.317 13:35:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.317 13:35:42 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.317 13:35:42 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.317 13:35:42 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:40.317 13:35:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:40.317 13:35:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.317 13:35:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:40.317 13:35:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:40.317 13:35:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:40.317 13:35:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.317 13:35:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.317 13:35:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:06:40.317 13:35:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:40.317 13:35:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:40.317 13:35:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:40.317 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:06:42.217 13:35:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:42.217 13:35:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.217 13:35:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.217 13:35:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.217 13:35:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.217 13:35:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.217 13:35:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.217 13:35:44 -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.217 13:35:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.217 13:35:44 -- nvmf/common.sh@296 -- # e810=() 00:06:42.217 13:35:44 -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.217 13:35:44 -- nvmf/common.sh@297 -- # x722=() 00:06:42.217 13:35:44 -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.217 13:35:44 -- nvmf/common.sh@298 -- # mlx=() 00:06:42.217 13:35:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.217 13:35:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.217 13:35:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:42.218 13:35:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.218 13:35:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.218 13:35:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.218 13:35:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.218 13:35:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:42.218 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:42.218 13:35:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.218 13:35:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:42.218 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:42.218 13:35:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.218 
13:35:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.218 13:35:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.218 13:35:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.218 13:35:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:42.218 Found net devices under 0000:84:00.0: cvl_0_0 00:06:42.218 13:35:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.218 13:35:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.218 13:35:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.218 13:35:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.218 13:35:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:42.218 Found net devices under 0000:84:00.1: cvl_0_1 00:06:42.218 13:35:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.218 13:35:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:42.218 13:35:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:42.218 13:35:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:42.218 13:35:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.218 13:35:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.218 13:35:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.218 13:35:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.218 13:35:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.218 13:35:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.218 13:35:44 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.218 13:35:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.218 13:35:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.218 13:35:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.218 13:35:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.218 13:35:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.218 13:35:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.218 13:35:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.218 13:35:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.218 13:35:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.218 13:35:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.218 13:35:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.218 13:35:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.218 13:35:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:06:42.218 00:06:42.218 --- 10.0.0.2 ping statistics --- 00:06:42.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.218 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:06:42.218 13:35:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:42.218 00:06:42.218 --- 10.0.0.1 ping statistics --- 00:06:42.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.218 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:42.218 13:35:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.218 13:35:45 -- nvmf/common.sh@411 -- # return 0 00:06:42.218 13:35:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:42.218 13:35:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.218 13:35:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:42.218 13:35:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:42.218 13:35:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.218 13:35:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:42.218 13:35:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:42.475 13:35:45 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:42.475 13:35:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:42.475 13:35:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.475 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.475 13:35:45 -- nvmf/common.sh@470 -- # nvmfpid=2515468 00:06:42.475 13:35:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.475 13:35:45 -- nvmf/common.sh@471 -- # waitforlisten 2515468 00:06:42.475 13:35:45 -- common/autotest_common.sh@817 -- # '[' -z 2515468 ']' 00:06:42.475 13:35:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.475 13:35:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.475 13:35:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:42.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.475 13:35:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.475 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.475 [2024-04-18 13:35:45.089027] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:06:42.475 [2024-04-18 13:35:45.089124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.476 [2024-04-18 13:35:45.154800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.476 [2024-04-18 13:35:45.265219] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.476 [2024-04-18 13:35:45.265281] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.476 [2024-04-18 13:35:45.265303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.476 [2024-04-18 13:35:45.265322] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.476 [2024-04-18 13:35:45.265337] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:42.476 [2024-04-18 13:35:45.265412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.476 [2024-04-18 13:35:45.265475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.476 [2024-04-18 13:35:45.265541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.476 [2024-04-18 13:35:45.265547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.733 13:35:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.733 13:35:45 -- common/autotest_common.sh@850 -- # return 0 00:06:42.733 13:35:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:42.733 13:35:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:42.733 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 13:35:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:42.733 13:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.733 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 [2024-04-18 13:35:45.421010] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.733 13:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:42.733 13:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.733 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 13:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:42.733 13:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.733 13:35:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.733 13:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:42.733 13:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.733 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 13:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.733 13:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.733 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 [2024-04-18 13:35:45.472911] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.733 13:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:42.733 13:35:45 -- target/connect_disconnect.sh@34 -- # set +x 00:06:46.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:51.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.928 13:35:59 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:56.928 13:35:59 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:56.928 13:35:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:56.928 13:35:59 -- nvmf/common.sh@117 -- # sync 00:06:56.928 13:35:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.928 13:35:59 -- nvmf/common.sh@120 -- # set +e 00:06:56.928 13:35:59 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:06:56.928 13:35:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.928 rmmod nvme_tcp 00:06:56.928 rmmod nvme_fabrics 00:06:56.928 rmmod nvme_keyring 00:06:56.928 13:35:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.928 13:35:59 -- nvmf/common.sh@124 -- # set -e 00:06:56.928 13:35:59 -- nvmf/common.sh@125 -- # return 0 00:06:56.928 13:35:59 -- nvmf/common.sh@478 -- # '[' -n 2515468 ']' 00:06:56.928 13:35:59 -- nvmf/common.sh@479 -- # killprocess 2515468 00:06:56.928 13:35:59 -- common/autotest_common.sh@936 -- # '[' -z 2515468 ']' 00:06:56.928 13:35:59 -- common/autotest_common.sh@940 -- # kill -0 2515468 00:06:56.928 13:35:59 -- common/autotest_common.sh@941 -- # uname 00:06:56.928 13:35:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.928 13:35:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2515468 00:06:56.928 13:35:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.928 13:35:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.928 13:35:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2515468' 00:06:56.928 killing process with pid 2515468 00:06:56.928 13:35:59 -- common/autotest_common.sh@955 -- # kill 2515468 00:06:56.928 13:35:59 -- common/autotest_common.sh@960 -- # wait 2515468 00:06:56.928 13:35:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:56.928 13:35:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:56.928 13:35:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:56.928 13:35:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.928 13:35:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.928 13:35:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.928 13:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.928 13:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.830 
13:36:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.830 00:06:58.830 real 0m18.604s 00:06:58.830 user 0m55.937s 00:06:58.830 sys 0m3.014s 00:06:58.830 13:36:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.830 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.830 ************************************ 00:06:58.830 END TEST nvmf_connect_disconnect 00:06:58.830 ************************************ 00:06:58.830 13:36:01 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:58.830 13:36:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:58.830 13:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.830 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.830 ************************************ 00:06:58.830 START TEST nvmf_multitarget 00:06:58.830 ************************************ 00:06:58.830 13:36:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:58.830 * Looking for test storage... 
00:06:58.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.830 13:36:01 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.830 13:36:01 -- nvmf/common.sh@7 -- # uname -s 00:06:58.830 13:36:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.830 13:36:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.830 13:36:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.830 13:36:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.830 13:36:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.830 13:36:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.830 13:36:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.830 13:36:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.830 13:36:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.089 13:36:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.089 13:36:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:59.089 13:36:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:59.089 13:36:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.089 13:36:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.089 13:36:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.089 13:36:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.089 13:36:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.089 13:36:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.089 13:36:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.089 13:36:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.089 13:36:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.089 13:36:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.089 13:36:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.089 13:36:01 -- paths/export.sh@5 -- # export PATH 00:06:59.089 13:36:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.089 13:36:01 -- nvmf/common.sh@47 -- # : 0 00:06:59.089 13:36:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.089 13:36:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.089 13:36:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.089 13:36:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.089 13:36:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.089 13:36:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.089 13:36:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.089 13:36:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.089 13:36:01 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:59.089 13:36:01 -- target/multitarget.sh@15 -- # nvmftestinit 00:06:59.089 13:36:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:59.089 13:36:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.089 13:36:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:59.089 13:36:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:59.090 13:36:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:59.090 13:36:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.090 13:36:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.090 13:36:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.090 13:36:01 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:59.090 13:36:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:59.090 13:36:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.090 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:07:00.995 13:36:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:00.995 13:36:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.995 13:36:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.995 13:36:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.995 13:36:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.995 13:36:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.995 13:36:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.995 13:36:03 -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.995 13:36:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.995 13:36:03 -- nvmf/common.sh@296 -- # e810=() 00:07:00.995 13:36:03 -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.995 13:36:03 -- nvmf/common.sh@297 -- # x722=() 00:07:00.995 13:36:03 -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.995 13:36:03 -- nvmf/common.sh@298 -- # mlx=() 00:07:00.995 13:36:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.995 13:36:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.995 13:36:03 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.995 13:36:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.995 13:36:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:00.995 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:00.995 13:36:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.995 13:36:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:00.995 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:00.995 13:36:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.995 13:36:03 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.995 13:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.995 13:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.995 13:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:00.995 Found net devices under 0000:84:00.0: cvl_0_0 00:07:00.995 13:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.995 13:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.995 13:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.995 13:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:00.995 Found net devices under 0000:84:00.1: cvl_0_1 00:07:00.995 13:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:00.995 13:36:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:00.995 13:36:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.995 13:36:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.995 13:36:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.995 13:36:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.995 13:36:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.995 13:36:03 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.995 13:36:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.995 13:36:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.995 13:36:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.995 13:36:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.995 13:36:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.995 13:36:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.995 13:36:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.995 13:36:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.995 13:36:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.995 13:36:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.995 13:36:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.995 13:36:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.995 13:36:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:07:00.995 00:07:00.995 --- 10.0.0.2 ping statistics --- 00:07:00.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.995 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:00.995 13:36:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:00.995 00:07:00.995 --- 10.0.0.1 ping statistics --- 00:07:00.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.995 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:00.995 13:36:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.995 13:36:03 -- nvmf/common.sh@411 -- # return 0 00:07:00.995 13:36:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:00.995 13:36:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.995 13:36:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:00.995 13:36:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.995 13:36:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:00.995 13:36:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:00.995 13:36:03 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:00.995 13:36:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:00.995 13:36:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:00.995 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.995 13:36:03 -- nvmf/common.sh@470 -- # nvmfpid=2519245 00:07:00.995 13:36:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.995 13:36:03 -- nvmf/common.sh@471 -- # waitforlisten 2519245 00:07:00.995 13:36:03 -- common/autotest_common.sh@817 -- # '[' -z 2519245 ']' 00:07:00.995 13:36:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.995 13:36:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.995 13:36:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.995 13:36:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.995 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.995 [2024-04-18 13:36:03.752845] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:00.995 [2024-04-18 13:36:03.752919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.995 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.253 [2024-04-18 13:36:03.816834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.253 [2024-04-18 13:36:03.927489] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.253 [2024-04-18 13:36:03.927552] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.254 [2024-04-18 13:36:03.927572] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.254 [2024-04-18 13:36:03.927591] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.254 [2024-04-18 13:36:03.927607] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:01.254 [2024-04-18 13:36:03.927686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.254 [2024-04-18 13:36:03.927755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.254 [2024-04-18 13:36:03.927822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.254 [2024-04-18 13:36:03.927828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.254 13:36:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.254 13:36:04 -- common/autotest_common.sh@850 -- # return 0 00:07:01.254 13:36:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:01.254 13:36:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.254 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:07:01.512 13:36:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.512 13:36:04 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:01.512 13:36:04 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:01.512 13:36:04 -- target/multitarget.sh@21 -- # jq length 00:07:01.512 13:36:04 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:01.512 13:36:04 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:01.512 "nvmf_tgt_1" 00:07:01.512 13:36:04 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:01.770 "nvmf_tgt_2" 00:07:01.770 13:36:04 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:01.770 13:36:04 -- target/multitarget.sh@28 -- # jq length 00:07:01.770 
13:36:04 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:01.770 13:36:04 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:02.028 true 00:07:02.028 13:36:04 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:02.028 true 00:07:02.028 13:36:04 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.028 13:36:04 -- target/multitarget.sh@35 -- # jq length 00:07:02.288 13:36:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:02.288 13:36:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:02.288 13:36:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:02.288 13:36:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:02.288 13:36:04 -- nvmf/common.sh@117 -- # sync 00:07:02.288 13:36:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.288 13:36:04 -- nvmf/common.sh@120 -- # set +e 00:07:02.288 13:36:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.288 13:36:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.288 rmmod nvme_tcp 00:07:02.288 rmmod nvme_fabrics 00:07:02.288 rmmod nvme_keyring 00:07:02.288 13:36:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.288 13:36:04 -- nvmf/common.sh@124 -- # set -e 00:07:02.288 13:36:04 -- nvmf/common.sh@125 -- # return 0 00:07:02.288 13:36:04 -- nvmf/common.sh@478 -- # '[' -n 2519245 ']' 00:07:02.288 13:36:04 -- nvmf/common.sh@479 -- # killprocess 2519245 00:07:02.288 13:36:04 -- common/autotest_common.sh@936 -- # '[' -z 2519245 ']' 00:07:02.288 13:36:04 -- common/autotest_common.sh@940 -- # kill -0 2519245 00:07:02.288 13:36:04 -- common/autotest_common.sh@941 -- # uname 00:07:02.288 13:36:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:07:02.288 13:36:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2519245 00:07:02.288 13:36:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.288 13:36:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.288 13:36:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2519245' 00:07:02.288 killing process with pid 2519245 00:07:02.288 13:36:04 -- common/autotest_common.sh@955 -- # kill 2519245 00:07:02.288 13:36:04 -- common/autotest_common.sh@960 -- # wait 2519245 00:07:02.549 13:36:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:02.549 13:36:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:02.549 13:36:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:02.549 13:36:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.549 13:36:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.549 13:36:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.549 13:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.549 13:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.090 13:36:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.090 00:07:05.090 real 0m5.699s 00:07:05.090 user 0m6.476s 00:07:05.090 sys 0m1.868s 00:07:05.090 13:36:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.090 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:07:05.090 ************************************ 00:07:05.090 END TEST nvmf_multitarget 00:07:05.090 ************************************ 00:07:05.090 13:36:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.090 13:36:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.090 13:36:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.090 13:36:07 -- common/autotest_common.sh@10 -- # set +x 
00:07:05.090 ************************************ 00:07:05.090 START TEST nvmf_rpc 00:07:05.090 ************************************ 00:07:05.090 13:36:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.090 * Looking for test storage... 00:07:05.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.090 13:36:07 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.090 13:36:07 -- nvmf/common.sh@7 -- # uname -s 00:07:05.090 13:36:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.090 13:36:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.090 13:36:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.090 13:36:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.090 13:36:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.090 13:36:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.090 13:36:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.090 13:36:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.090 13:36:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.090 13:36:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.090 13:36:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:05.090 13:36:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:05.090 13:36:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.090 13:36:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.090 13:36:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.090 13:36:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.090 13:36:07 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.090 13:36:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.090 13:36:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.090 13:36:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.090 13:36:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.090 13:36:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.090 13:36:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.090 13:36:07 -- paths/export.sh@5 -- # export PATH 00:07:05.090 13:36:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.090 13:36:07 -- nvmf/common.sh@47 -- # : 0 00:07:05.090 13:36:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.090 13:36:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.090 13:36:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.090 13:36:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.090 13:36:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.090 13:36:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.090 13:36:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.090 13:36:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.090 13:36:07 -- target/rpc.sh@11 -- # loops=5 00:07:05.090 13:36:07 -- target/rpc.sh@23 -- # nvmftestinit 00:07:05.090 13:36:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:05.090 
13:36:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.090 13:36:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:05.090 13:36:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:05.090 13:36:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:05.090 13:36:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.090 13:36:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.090 13:36:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.090 13:36:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:05.090 13:36:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:05.090 13:36:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.090 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:07:06.993 13:36:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:06.993 13:36:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.993 13:36:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.993 13:36:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.993 13:36:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.993 13:36:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.993 13:36:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.993 13:36:09 -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.993 13:36:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.993 13:36:09 -- nvmf/common.sh@296 -- # e810=() 00:07:06.993 13:36:09 -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.993 13:36:09 -- nvmf/common.sh@297 -- # x722=() 00:07:06.993 13:36:09 -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.993 13:36:09 -- nvmf/common.sh@298 -- # mlx=() 00:07:06.993 13:36:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.993 13:36:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.993 13:36:09 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.993 13:36:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.993 13:36:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.993 13:36:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.993 13:36:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.993 13:36:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:06.993 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:06.993 13:36:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.993 13:36:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:06.993 Found 
0000:84:00.1 (0x8086 - 0x159b) 00:07:06.993 13:36:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.993 13:36:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.993 13:36:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.993 13:36:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:06.993 13:36:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.993 13:36:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:06.993 Found net devices under 0000:84:00.0: cvl_0_0 00:07:06.993 13:36:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.993 13:36:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.993 13:36:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.993 13:36:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:06.993 13:36:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.993 13:36:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:06.993 Found net devices under 0000:84:00.1: cvl_0_1 00:07:06.993 13:36:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.993 13:36:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:06.993 13:36:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:06.993 13:36:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:06.993 13:36:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:06.993 
13:36:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:06.993 13:36:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.993 13:36:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.993 13:36:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.993 13:36:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.994 13:36:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.994 13:36:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.994 13:36:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.994 13:36:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.994 13:36:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.994 13:36:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.994 13:36:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.994 13:36:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.994 13:36:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.994 13:36:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.994 13:36:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.994 13:36:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.994 13:36:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.994 13:36:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.994 13:36:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.994 13:36:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:06.994 00:07:06.994 --- 10.0.0.2 ping statistics --- 00:07:06.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.994 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:06.994 13:36:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:07:06.994 00:07:06.994 --- 10.0.0.1 ping statistics --- 00:07:06.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.994 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:06.994 13:36:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.994 13:36:09 -- nvmf/common.sh@411 -- # return 0 00:07:06.994 13:36:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:06.994 13:36:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.994 13:36:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:06.994 13:36:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:06.994 13:36:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.994 13:36:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:06.994 13:36:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:06.994 13:36:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:06.994 13:36:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:06.994 13:36:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:06.994 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:07:06.994 13:36:09 -- nvmf/common.sh@470 -- # nvmfpid=2521871 00:07:06.994 13:36:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.994 13:36:09 -- nvmf/common.sh@471 -- # waitforlisten 2521871 00:07:06.994 13:36:09 -- common/autotest_common.sh@817 -- # 
'[' -z 2521871 ']' 00:07:06.994 13:36:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.994 13:36:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.994 13:36:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.994 13:36:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.994 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:07:06.994 [2024-04-18 13:36:09.664759] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:06.994 [2024-04-18 13:36:09.664839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.994 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.994 [2024-04-18 13:36:09.736280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.252 [2024-04-18 13:36:09.857615] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.253 [2024-04-18 13:36:09.857680] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.253 [2024-04-18 13:36:09.857706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.253 [2024-04-18 13:36:09.857726] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.253 [2024-04-18 13:36:09.857743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
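The nvmf_tcp_init sequence traced above flushes addresses on the two ice ports, moves the target port (cvl_0_0) into a fresh cvl_0_0_ns_spdk namespace, assigns 10.0.0.2 to the target side and 10.0.0.1 to the initiator side, opens TCP port 4420 in iptables, and verifies both directions with ping before launching nvmf_tgt inside the namespace. A dry-run sketch of that sequence (interface names, IPs, and port taken from the log; the run() wrapper is a hypothetical addition so the steps can be shown without root):

```shell
# Dry-run sketch of the NVMe/TCP namespace setup performed by nvmf_tcp_init.
# Set DRYRUN=0 and run as root for real (requires the NICs to exist).
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                         # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target side
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

The namespace is why the target is started as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` later in the log: both NICs are on the same host, so the namespace keeps the kernel from short-circuiting the TCP traffic off the wire.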
00:07:07.253 [2024-04-18 13:36:09.858139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.253 [2024-04-18 13:36:09.858206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.253 [2024-04-18 13:36:09.858235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.253 [2024-04-18 13:36:09.858239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.822 13:36:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.822 13:36:10 -- common/autotest_common.sh@850 -- # return 0 00:07:07.822 13:36:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:07.822 13:36:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:07.822 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.081 13:36:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.081 13:36:10 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:08.081 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.081 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.081 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.081 13:36:10 -- target/rpc.sh@26 -- # stats='{ 00:07:08.081 "tick_rate": 2700000000, 00:07:08.081 "poll_groups": [ 00:07:08.081 { 00:07:08.081 "name": "nvmf_tgt_poll_group_0", 00:07:08.081 "admin_qpairs": 0, 00:07:08.081 "io_qpairs": 0, 00:07:08.081 "current_admin_qpairs": 0, 00:07:08.081 "current_io_qpairs": 0, 00:07:08.081 "pending_bdev_io": 0, 00:07:08.081 "completed_nvme_io": 0, 00:07:08.081 "transports": [] 00:07:08.081 }, 00:07:08.081 { 00:07:08.081 "name": "nvmf_tgt_poll_group_1", 00:07:08.081 "admin_qpairs": 0, 00:07:08.081 "io_qpairs": 0, 00:07:08.081 "current_admin_qpairs": 0, 00:07:08.081 "current_io_qpairs": 0, 00:07:08.081 "pending_bdev_io": 0, 00:07:08.081 "completed_nvme_io": 0, 00:07:08.081 "transports": [] 00:07:08.081 }, 00:07:08.081 { 00:07:08.081 "name": 
"nvmf_tgt_poll_group_2", 00:07:08.081 "admin_qpairs": 0, 00:07:08.081 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [] 00:07:08.082 }, 00:07:08.082 { 00:07:08.082 "name": "nvmf_tgt_poll_group_3", 00:07:08.082 "admin_qpairs": 0, 00:07:08.082 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [] 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 }' 00:07:08.082 13:36:10 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:08.082 13:36:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:08.082 13:36:10 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:08.082 13:36:10 -- target/rpc.sh@15 -- # wc -l 00:07:08.082 13:36:10 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:08.082 13:36:10 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:08.082 13:36:10 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:08.082 13:36:10 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.082 [2024-04-18 13:36:10.745481] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.082 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.082 13:36:10 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.082 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.082 13:36:10 -- target/rpc.sh@33 -- # stats='{ 00:07:08.082 "tick_rate": 2700000000, 00:07:08.082 "poll_groups": [ 00:07:08.082 { 00:07:08.082 "name": 
"nvmf_tgt_poll_group_0", 00:07:08.082 "admin_qpairs": 0, 00:07:08.082 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [ 00:07:08.082 { 00:07:08.082 "trtype": "TCP" 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 }, 00:07:08.082 { 00:07:08.082 "name": "nvmf_tgt_poll_group_1", 00:07:08.082 "admin_qpairs": 0, 00:07:08.082 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [ 00:07:08.082 { 00:07:08.082 "trtype": "TCP" 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 }, 00:07:08.082 { 00:07:08.082 "name": "nvmf_tgt_poll_group_2", 00:07:08.082 "admin_qpairs": 0, 00:07:08.082 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [ 00:07:08.082 { 00:07:08.082 "trtype": "TCP" 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 }, 00:07:08.082 { 00:07:08.082 "name": "nvmf_tgt_poll_group_3", 00:07:08.082 "admin_qpairs": 0, 00:07:08.082 "io_qpairs": 0, 00:07:08.082 "current_admin_qpairs": 0, 00:07:08.082 "current_io_qpairs": 0, 00:07:08.082 "pending_bdev_io": 0, 00:07:08.082 "completed_nvme_io": 0, 00:07:08.082 "transports": [ 00:07:08.082 { 00:07:08.082 "trtype": "TCP" 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 } 00:07:08.082 ] 00:07:08.082 }' 00:07:08.082 13:36:10 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:08.082 13:36:10 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:08.082 13:36:10 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:08.082 13:36:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:08.082 13:36:10 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:08.082 13:36:10 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:08.082 13:36:10 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:08.082 13:36:10 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:08.082 13:36:10 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.082 Malloc1 00:07:08.082 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.082 13:36:10 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.082 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.082 13:36:10 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.082 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.082 13:36:10 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:08.082 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.082 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.342 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.342 13:36:10 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
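The jcount and jsum helpers traced above (target/rpc.sh@14 and @19) are small jq pipelines: jcount applies a filter and counts the emitted lines with wc -l, jsum sums the emitted values with awk, which is how the test asserts "4 poll groups, 0 admin qpairs, 0 io qpairs" against the nvmf_get_stats JSON. A sketch of the same pattern against a stats-shaped document (helper names match the log; the sample JSON is an abbreviated, invented stand-in for the real reply):

```shell
# jcount/jsum in the style of target/rpc.sh: count vs. sum the values a jq
# filter emits. The sample JSON mimics a trimmed nvmf_get_stats reply.
stats='{"poll_groups":[
  {"name":"pg0","admin_qpairs":0,"io_qpairs":2},
  {"name":"pg1","admin_qpairs":1,"io_qpairs":3}]}'

jcount() { echo "$stats" | jq "$1" | wc -l; }
jsum()   { echo "$stats" | jq "$1" | awk '{s+=$1} END {print s}'; }

jcount '.poll_groups[].name'       # → 2
jsum '.poll_groups[].io_qpairs'    # → 5
```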
00:07:08.342 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.342 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.342 [2024-04-18 13:36:10.899817] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.342 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.342 13:36:10 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:08.342 13:36:10 -- common/autotest_common.sh@638 -- # local es=0 00:07:08.342 13:36:10 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:08.342 13:36:10 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:08.342 13:36:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.342 13:36:10 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:08.342 13:36:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.342 13:36:10 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:08.342 13:36:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.342 13:36:10 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:08.342 13:36:10 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:08.342 13:36:10 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:08.342 [2024-04-18 13:36:10.922323] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:08.342 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:08.342 could not add new controller: failed to write to nvme-fabrics device 00:07:08.342 13:36:10 -- common/autotest_common.sh@641 -- # es=1 00:07:08.342 13:36:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:08.342 13:36:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:08.342 13:36:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:08.342 13:36:10 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:08.342 13:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.342 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.342 13:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.342 13:36:10 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.910 13:36:11 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.910 13:36:11 -- common/autotest_common.sh@1184 -- # local i=0 00:07:08.910 13:36:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.910 13:36:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:08.910 13:36:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:10.819 13:36:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:10.819 13:36:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:10.819 13:36:13 -- common/autotest_common.sh@1193 -- 
# grep -c SPDKISFASTANDAWESOME 00:07:10.819 13:36:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:10.819 13:36:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.819 13:36:13 -- common/autotest_common.sh@1194 -- # return 0 00:07:10.819 13:36:13 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.819 13:36:13 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.819 13:36:13 -- common/autotest_common.sh@1205 -- # local i=0 00:07:10.819 13:36:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:10.819 13:36:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.819 13:36:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:10.819 13:36:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.819 13:36:13 -- common/autotest_common.sh@1217 -- # return 0 00:07:10.819 13:36:13 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:10.819 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.819 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.819 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.819 13:36:13 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.819 13:36:13 -- common/autotest_common.sh@638 -- # local es=0 00:07:10.819 13:36:13 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:07:10.819 13:36:13 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:10.819 13:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.819 13:36:13 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:10.819 13:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.819 13:36:13 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:10.819 13:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.819 13:36:13 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:10.819 13:36:13 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:10.819 13:36:13 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.819 [2024-04-18 13:36:13.591378] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:10.819 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:10.819 could not add new controller: failed to write to nvme-fabrics device 00:07:10.819 13:36:13 -- common/autotest_common.sh@641 -- # es=1 00:07:10.819 13:36:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:10.819 13:36:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:10.819 13:36:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:10.819 13:36:13 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:10.819 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.819 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.819 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.819 13:36:13 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.831 13:36:14 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.831 13:36:14 -- common/autotest_common.sh@1184 -- # local i=0 00:07:11.831 13:36:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.831 13:36:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:11.831 13:36:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:13.734 13:36:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:13.734 13:36:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:13.734 13:36:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.734 13:36:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:13.734 13:36:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.734 13:36:16 -- common/autotest_common.sh@1194 -- # return 0 00:07:13.734 13:36:16 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.734 13:36:16 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.734 13:36:16 -- common/autotest_common.sh@1205 -- # local i=0 00:07:13.734 13:36:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:13.734 13:36:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.734 13:36:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:13.734 13:36:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.734 13:36:16 -- common/autotest_common.sh@1217 -- # return 0 00:07:13.734 13:36:16 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.734 13:36:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:13.734 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.734 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.734 13:36:16 -- target/rpc.sh@81 -- # seq 1 5 00:07:13.734 13:36:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:13.734 13:36:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.734 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.734 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.734 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.734 13:36:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.734 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.734 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.734 [2024-04-18 13:36:16.334258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.734 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.734 13:36:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:13.734 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.734 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.734 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.734 13:36:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.734 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.734 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:07:13.734 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.734 13:36:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:07:14.301 13:36:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.301 13:36:16 -- common/autotest_common.sh@1184 -- # local i=0 00:07:14.301 13:36:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.301 13:36:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:14.301 13:36:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:16.206 13:36:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:16.206 13:36:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:16.206 13:36:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.206 13:36:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:16.466 13:36:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.466 13:36:19 -- common/autotest_common.sh@1194 -- # return 0 00:07:16.466 13:36:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.466 13:36:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.466 13:36:19 -- common/autotest_common.sh@1205 -- # local i=0 00:07:16.466 13:36:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:16.466 13:36:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.466 13:36:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:16.466 13:36:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.466 13:36:19 -- common/autotest_common.sh@1217 -- # return 0 00:07:16.466 13:36:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
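The waitforserial / waitforserial_disconnect steps traced above poll `lsblk -l -o NAME,SERIAL` and `grep -c` for the subsystem serial (SPDKISFASTANDAWESOME) until the expected number of namespaces appears (or goes away) after each `nvme connect` / `nvme disconnect`. A sketch of that polling loop, with lsblk stubbed out so the shape is visible without real NVMe devices (the stub and its output are illustrative, not from the log):

```shell
# waitforserial-style loop: poll a block-device listing until a device with
# the expected serial appears. lsblk is stubbed here for illustration only.
lsblk() { printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
    serial=$1 want=${2:-1} i=0
    while [ $((i += 1)) -le 15 ]; do
        got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        [ "$got" -eq "$want" ] && return 0   # expected device count reached
        sleep 1
    done
    return 1                                 # gave up after ~15s
}

waitforserial SPDKISFASTANDAWESOME && echo connected
# → connected
```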
00:07:16.466 13:36:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.466 13:36:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:16.466 13:36:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.466 13:36:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 [2024-04-18 13:36:19.110647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.466 13:36:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.466 13:36:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.466 13:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.466 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.466 13:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.466 13:36:19 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.031 13:36:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.031 13:36:19 -- common/autotest_common.sh@1184 -- # local i=0 00:07:17.031 13:36:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.031 13:36:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:17.031 13:36:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:18.938 13:36:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:18.938 13:36:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:18.938 13:36:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.938 13:36:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:18.938 13:36:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.938 13:36:21 -- common/autotest_common.sh@1194 -- # return 0 00:07:18.938 13:36:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.198 13:36:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.198 13:36:21 -- common/autotest_common.sh@1205 -- # local i=0 00:07:19.198 13:36:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:19.198 13:36:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.198 13:36:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:19.198 13:36:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.198 13:36:21 -- common/autotest_common.sh@1217 -- # return 0 00:07:19.198 13:36:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:19.198 13:36:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 [2024-04-18 13:36:21.872786] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.198 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:19.198 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.198 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 13:36:21 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.198 13:36:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.768 13:36:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.768 13:36:22 -- common/autotest_common.sh@1184 -- # local i=0 00:07:19.768 13:36:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.768 13:36:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:19.768 13:36:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:21.677 13:36:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:21.677 13:36:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:21.677 13:36:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.677 13:36:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:21.677 13:36:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.677 13:36:24 -- common/autotest_common.sh@1194 -- # return 0 00:07:21.677 13:36:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.937 13:36:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.937 13:36:24 -- common/autotest_common.sh@1205 -- # local i=0 00:07:21.937 13:36:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:21.937 13:36:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.937 13:36:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:21.937 13:36:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.937 13:36:24 -- common/autotest_common.sh@1217 -- # return 0 00:07:21.937 13:36:24 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.937 13:36:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 [2024-04-18 13:36:24.637171] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.937 13:36:24 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:21.937 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.937 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.937 13:36:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.507 13:36:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.507 13:36:25 -- common/autotest_common.sh@1184 -- # local i=0 00:07:22.507 13:36:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.507 13:36:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:22.507 13:36:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:24.411 13:36:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:24.411 13:36:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:24.411 13:36:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.670 13:36:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:24.670 13:36:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.670 13:36:27 -- common/autotest_common.sh@1194 -- # return 0 00:07:24.670 13:36:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.671 13:36:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.671 13:36:27 -- common/autotest_common.sh@1205 -- # local i=0 00:07:24.671 13:36:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:24.671 13:36:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.671 13:36:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:24.671 13:36:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.671 
13:36:27 -- common/autotest_common.sh@1217 -- # return 0 00:07:24.671 13:36:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.671 13:36:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 [2024-04-18 13:36:27.330641] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.671 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.671 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.671 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.671 13:36:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.240 13:36:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.240 13:36:27 -- common/autotest_common.sh@1184 -- # local i=0 00:07:25.240 13:36:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.240 13:36:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:25.240 13:36:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:27.142 13:36:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:27.142 13:36:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:27.142 13:36:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.142 13:36:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:27.142 13:36:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.142 13:36:29 -- common/autotest_common.sh@1194 -- # return 0 00:07:27.142 13:36:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.401 13:36:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@1205 -- # local i=0 00:07:27.401 13:36:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:27.401 13:36:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:07:27.401 13:36:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@1217 -- # return 0 00:07:27.401 13:36:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@99 -- # seq 1 5 00:07:27.401 13:36:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.401 13:36:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 [2024-04-18 13:36:30.067270] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.401 13:36:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 [2024-04-18 13:36:30.115272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 
13:36:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.401 13:36:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.401 13:36:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.401 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.401 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.401 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 
[2024-04-18 13:36:30.163437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.402 13:36:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.402 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.402 13:36:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.402 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 [2024-04-18 13:36:30.211619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:27.660 13:36:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 
-- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 [2024-04-18 13:36:30.259796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.660 13:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:27.660 13:36:30 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.660 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.660 13:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.660 13:36:30 -- target/rpc.sh@110 -- # stats='{ 00:07:27.660 "tick_rate": 2700000000, 00:07:27.660 "poll_groups": [ 00:07:27.661 { 00:07:27.661 "name": "nvmf_tgt_poll_group_0", 00:07:27.661 "admin_qpairs": 2, 00:07:27.661 "io_qpairs": 84, 00:07:27.661 "current_admin_qpairs": 0, 00:07:27.661 "current_io_qpairs": 0, 00:07:27.661 "pending_bdev_io": 0, 00:07:27.661 "completed_nvme_io": 184, 00:07:27.661 "transports": [ 00:07:27.661 { 00:07:27.661 "trtype": "TCP" 00:07:27.661 } 00:07:27.661 ] 00:07:27.661 }, 00:07:27.661 { 00:07:27.661 "name": "nvmf_tgt_poll_group_1", 00:07:27.661 "admin_qpairs": 2, 00:07:27.661 "io_qpairs": 84, 00:07:27.661 "current_admin_qpairs": 0, 00:07:27.661 "current_io_qpairs": 0, 00:07:27.661 "pending_bdev_io": 0, 00:07:27.661 "completed_nvme_io": 185, 00:07:27.661 "transports": [ 00:07:27.661 { 00:07:27.661 "trtype": "TCP" 00:07:27.661 } 00:07:27.661 ] 00:07:27.661 }, 00:07:27.661 { 00:07:27.661 "name": "nvmf_tgt_poll_group_2", 00:07:27.661 "admin_qpairs": 1, 00:07:27.661 "io_qpairs": 84, 00:07:27.661 "current_admin_qpairs": 0, 00:07:27.661 "current_io_qpairs": 0, 00:07:27.661 "pending_bdev_io": 0, 00:07:27.661 "completed_nvme_io": 232, 00:07:27.661 "transports": [ 00:07:27.661 { 00:07:27.661 "trtype": "TCP" 00:07:27.661 } 00:07:27.661 ] 00:07:27.661 }, 00:07:27.661 { 00:07:27.661 "name": "nvmf_tgt_poll_group_3", 00:07:27.661 "admin_qpairs": 2, 00:07:27.661 "io_qpairs": 84, 00:07:27.661 "current_admin_qpairs": 0, 00:07:27.661 "current_io_qpairs": 0, 00:07:27.661 "pending_bdev_io": 0, 00:07:27.661 "completed_nvme_io": 85, 00:07:27.661 "transports": [ 00:07:27.661 { 00:07:27.661 "trtype": "TCP" 00:07:27.661 } 00:07:27.661 ] 00:07:27.661 } 00:07:27.661 ] 00:07:27.661 }' 00:07:27.661 13:36:30 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
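The `jsum` helper invoked here aggregates a numeric field across all poll groups in the `nvmf_get_stats` JSON: `jq` emits one number per group and `awk` sums them (yielding the `(( 7 > 0 ))` and `(( 336 > 0 ))` checks below). A self-contained sketch, using a trimmed-down stats document with only the summed field:

```shell
#!/usr/bin/env bash
# Sketch of the jsum aggregation from target/rpc.sh: extract one numeric field
# per poll group with jq, then total the values with awk. The inline JSON is a
# reduced stand-in for the real nvmf_get_stats output shown in the trace.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_0","admin_qpairs":2},
  {"name":"nvmf_tgt_poll_group_1","admin_qpairs":2},
  {"name":"nvmf_tgt_poll_group_2","admin_qpairs":1},
  {"name":"nvmf_tgt_poll_group_3","admin_qpairs":2}]}'
total=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
echo "$total"
```

With the four `admin_qpairs` values from the trace (2, 2, 1, 2) this prints `7`, matching the `(( 7 > 0 ))` assertion that follows.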
00:07:27.661 13:36:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:27.661 13:36:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:27.661 13:36:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:27.661 13:36:30 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:27.661 13:36:30 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:27.661 13:36:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:27.661 13:36:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:27.661 13:36:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:27.661 13:36:30 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:27.661 13:36:30 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:27.661 13:36:30 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:27.661 13:36:30 -- target/rpc.sh@123 -- # nvmftestfini 00:07:27.661 13:36:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:27.661 13:36:30 -- nvmf/common.sh@117 -- # sync 00:07:27.661 13:36:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.661 13:36:30 -- nvmf/common.sh@120 -- # set +e 00:07:27.661 13:36:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.661 13:36:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.661 rmmod nvme_tcp 00:07:27.661 rmmod nvme_fabrics 00:07:27.661 rmmod nvme_keyring 00:07:27.661 13:36:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.661 13:36:30 -- nvmf/common.sh@124 -- # set -e 00:07:27.661 13:36:30 -- nvmf/common.sh@125 -- # return 0 00:07:27.661 13:36:30 -- nvmf/common.sh@478 -- # '[' -n 2521871 ']' 00:07:27.661 13:36:30 -- nvmf/common.sh@479 -- # killprocess 2521871 00:07:27.661 13:36:30 -- common/autotest_common.sh@936 -- # '[' -z 2521871 ']' 00:07:27.661 13:36:30 -- common/autotest_common.sh@940 -- # kill -0 2521871 00:07:27.661 13:36:30 -- common/autotest_common.sh@941 -- # uname 00:07:27.661 13:36:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:27.918 
13:36:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2521871 00:07:27.918 13:36:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:27.918 13:36:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:27.918 13:36:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2521871' 00:07:27.918 killing process with pid 2521871 00:07:27.918 13:36:30 -- common/autotest_common.sh@955 -- # kill 2521871 00:07:27.918 13:36:30 -- common/autotest_common.sh@960 -- # wait 2521871 00:07:28.178 13:36:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:28.178 13:36:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:28.178 13:36:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:28.178 13:36:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.178 13:36:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.178 13:36:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.178 13:36:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.178 13:36:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.127 13:36:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.127 00:07:30.127 real 0m25.439s 00:07:30.127 user 1m23.116s 00:07:30.127 sys 0m3.722s 00:07:30.127 13:36:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.127 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.127 ************************************ 00:07:30.127 END TEST nvmf_rpc 00:07:30.127 ************************************ 00:07:30.127 13:36:32 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:30.127 13:36:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:30.127 13:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.127 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:07:30.385 
************************************ 00:07:30.385 START TEST nvmf_invalid 00:07:30.385 ************************************ 00:07:30.385 13:36:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:30.385 * Looking for test storage... 00:07:30.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.385 13:36:33 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.385 13:36:33 -- nvmf/common.sh@7 -- # uname -s 00:07:30.385 13:36:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.385 13:36:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.385 13:36:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.385 13:36:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.385 13:36:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.385 13:36:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.385 13:36:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.385 13:36:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.385 13:36:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.385 13:36:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.385 13:36:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:30.385 13:36:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:30.385 13:36:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.385 13:36:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.385 13:36:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.385 13:36:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.385 13:36:33 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.385 13:36:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.385 13:36:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.386 13:36:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.386 13:36:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.386 13:36:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.386 13:36:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.386 13:36:33 -- paths/export.sh@5 -- # export PATH 00:07:30.386 13:36:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.386 13:36:33 -- nvmf/common.sh@47 -- # : 0 00:07:30.386 13:36:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.386 13:36:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.386 13:36:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.386 13:36:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.386 13:36:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.386 13:36:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.386 13:36:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.386 13:36:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.386 13:36:33 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:30.386 13:36:33 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.386 13:36:33 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:30.386 13:36:33 -- target/invalid.sh@14 -- # target=foobar 00:07:30.386 13:36:33 -- target/invalid.sh@16 -- # RANDOM=0 00:07:30.386 13:36:33 -- target/invalid.sh@34 -- # nvmftestinit 00:07:30.386 13:36:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:30.386 13:36:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.386 13:36:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:30.386 13:36:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:30.386 13:36:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:30.386 13:36:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.386 13:36:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.386 13:36:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.386 13:36:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:30.386 13:36:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:30.386 13:36:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.386 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:07:32.289 13:36:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.289 13:36:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.289 13:36:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.289 13:36:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.289 13:36:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.289 13:36:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.289 13:36:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.289 13:36:35 -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.289 13:36:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.289 13:36:35 -- nvmf/common.sh@296 -- # e810=() 00:07:32.289 13:36:35 -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.289 
13:36:35 -- nvmf/common.sh@297 -- # x722=() 00:07:32.289 13:36:35 -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.289 13:36:35 -- nvmf/common.sh@298 -- # mlx=() 00:07:32.289 13:36:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.289 13:36:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.289 13:36:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.289 13:36:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.289 13:36:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.289 13:36:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:32.289 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:32.289 13:36:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.289 13:36:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:32.289 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:32.289 13:36:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.289 13:36:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.289 13:36:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.289 13:36:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:32.289 Found net devices under 0000:84:00.0: cvl_0_0 00:07:32.289 13:36:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.289 13:36:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.289 13:36:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.289 13:36:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.289 13:36:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.1: cvl_0_1' 00:07:32.289 Found net devices under 0000:84:00.1: cvl_0_1 00:07:32.289 13:36:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.289 13:36:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:32.289 13:36:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:32.289 13:36:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:32.289 13:36:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.289 13:36:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.289 13:36:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.289 13:36:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.289 13:36:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.289 13:36:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.289 13:36:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.289 13:36:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.289 13:36:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.289 13:36:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.289 13:36:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.289 13:36:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.289 13:36:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.548 13:36:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.548 13:36:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.548 13:36:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.548 13:36:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.548 13:36:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:07:32.548 13:36:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.548 13:36:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:07:32.548 00:07:32.548 --- 10.0.0.2 ping statistics --- 00:07:32.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.548 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:07:32.548 13:36:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:32.548 00:07:32.548 --- 10.0.0.1 ping statistics --- 00:07:32.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.548 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:32.548 13:36:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.548 13:36:35 -- nvmf/common.sh@411 -- # return 0 00:07:32.548 13:36:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:32.548 13:36:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.548 13:36:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:32.548 13:36:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:32.548 13:36:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.548 13:36:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:32.548 13:36:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:32.548 13:36:35 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:32.548 13:36:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:32.548 13:36:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:32.548 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:32.548 13:36:35 -- nvmf/common.sh@470 -- # nvmfpid=2526529 00:07:32.548 13:36:35 -- nvmf/common.sh@469 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.548 13:36:35 -- nvmf/common.sh@471 -- # waitforlisten 2526529 00:07:32.548 13:36:35 -- common/autotest_common.sh@817 -- # '[' -z 2526529 ']' 00:07:32.548 13:36:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.548 13:36:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.548 13:36:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.548 13:36:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.548 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:32.548 [2024-04-18 13:36:35.278820] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:32.548 [2024-04-18 13:36:35.278916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.548 [2024-04-18 13:36:35.344880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.805 [2024-04-18 13:36:35.455140] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.805 [2024-04-18 13:36:35.455207] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.805 [2024-04-18 13:36:35.455230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.805 [2024-04-18 13:36:35.455247] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:32.805 [2024-04-18 13:36:35.455262] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.805 [2024-04-18 13:36:35.455342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.805 [2024-04-18 13:36:35.455405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.805 [2024-04-18 13:36:35.455472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.805 [2024-04-18 13:36:35.455478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.805 13:36:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.805 13:36:35 -- common/autotest_common.sh@850 -- # return 0 00:07:32.805 13:36:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:32.805 13:36:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:32.805 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:07:32.805 13:36:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.805 13:36:35 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:32.805 13:36:35 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22936 00:07:33.062 [2024-04-18 13:36:35.825406] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:33.062 13:36:35 -- target/invalid.sh@40 -- # out='request: 00:07:33.062 { 00:07:33.062 "nqn": "nqn.2016-06.io.spdk:cnode22936", 00:07:33.062 "tgt_name": "foobar", 00:07:33.062 "method": "nvmf_create_subsystem", 00:07:33.062 "req_id": 1 00:07:33.062 } 00:07:33.062 Got JSON-RPC error response 00:07:33.062 response: 00:07:33.062 { 00:07:33.062 "code": -32603, 00:07:33.062 "message": "Unable to find target foobar" 00:07:33.062 }' 00:07:33.062 13:36:35 -- target/invalid.sh@41 -- # [[ request: 
00:07:33.062 { 00:07:33.062 "nqn": "nqn.2016-06.io.spdk:cnode22936", 00:07:33.062 "tgt_name": "foobar", 00:07:33.062 "method": "nvmf_create_subsystem", 00:07:33.062 "req_id": 1 00:07:33.062 } 00:07:33.062 Got JSON-RPC error response 00:07:33.062 response: 00:07:33.062 { 00:07:33.062 "code": -32603, 00:07:33.062 "message": "Unable to find target foobar" 00:07:33.062 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:33.062 13:36:35 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:33.062 13:36:35 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8376 00:07:33.319 [2024-04-18 13:36:36.074256] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8376: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:33.319 13:36:36 -- target/invalid.sh@45 -- # out='request: 00:07:33.319 { 00:07:33.319 "nqn": "nqn.2016-06.io.spdk:cnode8376", 00:07:33.319 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:33.319 "method": "nvmf_create_subsystem", 00:07:33.319 "req_id": 1 00:07:33.319 } 00:07:33.319 Got JSON-RPC error response 00:07:33.319 response: 00:07:33.319 { 00:07:33.319 "code": -32602, 00:07:33.319 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:33.319 }' 00:07:33.319 13:36:36 -- target/invalid.sh@46 -- # [[ request: 00:07:33.319 { 00:07:33.319 "nqn": "nqn.2016-06.io.spdk:cnode8376", 00:07:33.319 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:33.319 "method": "nvmf_create_subsystem", 00:07:33.319 "req_id": 1 00:07:33.319 } 00:07:33.319 Got JSON-RPC error response 00:07:33.319 response: 00:07:33.319 { 00:07:33.319 "code": -32602, 00:07:33.319 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:33.319 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:33.319 13:36:36 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:33.319 13:36:36 -- target/invalid.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24491 00:07:33.577 [2024-04-18 13:36:36.335103] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24491: invalid model number 'SPDK_Controller' 00:07:33.577 13:36:36 -- target/invalid.sh@50 -- # out='request: 00:07:33.577 { 00:07:33.577 "nqn": "nqn.2016-06.io.spdk:cnode24491", 00:07:33.577 "model_number": "SPDK_Controller\u001f", 00:07:33.577 "method": "nvmf_create_subsystem", 00:07:33.577 "req_id": 1 00:07:33.577 } 00:07:33.577 Got JSON-RPC error response 00:07:33.577 response: 00:07:33.577 { 00:07:33.577 "code": -32602, 00:07:33.577 "message": "Invalid MN SPDK_Controller\u001f" 00:07:33.577 }' 00:07:33.577 13:36:36 -- target/invalid.sh@51 -- # [[ request: 00:07:33.577 { 00:07:33.577 "nqn": "nqn.2016-06.io.spdk:cnode24491", 00:07:33.577 "model_number": "SPDK_Controller\u001f", 00:07:33.577 "method": "nvmf_create_subsystem", 00:07:33.577 "req_id": 1 00:07:33.577 } 00:07:33.577 Got JSON-RPC error response 00:07:33.577 response: 00:07:33.577 { 00:07:33.577 "code": -32602, 00:07:33.577 "message": "Invalid MN SPDK_Controller\u001f" 00:07:33.577 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:33.577 13:36:36 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:33.577 13:36:36 -- target/invalid.sh@19 -- # local length=21 ll 00:07:33.577 13:36:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:33.577 13:36:36 -- 
target/invalid.sh@21 -- # local chars 00:07:33.577 13:36:36 -- target/invalid.sh@22 -- # local string 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 123 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # string+='{' 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 44 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # string+=, 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 110 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # string+=n 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 35 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # string+='#' 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 100 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # string+=d 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # printf %x 77 00:07:33.577 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:33.577 
13:36:36 -- target/invalid.sh@25 -- # string+=M 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.577 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 33 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+='!' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 127 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=$'\177' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 93 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=']' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 77 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=M 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 58 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=: 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 101 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=e 
00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 90 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=Z 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 127 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=$'\177' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 34 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+='"' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 58 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=: 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 63 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+='?' 
00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 58 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=: 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 126 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+='~' 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 71 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=G 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # printf %x 120 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:33.835 13:36:36 -- target/invalid.sh@25 -- # string+=x 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:33.835 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:33.835 13:36:36 -- target/invalid.sh@28 -- # [[ { == \- ]] 00:07:33.835 13:36:36 -- target/invalid.sh@31 -- # echo '{,n#dM!]M:eZ":?:~Gx' 00:07:33.835 13:36:36 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '{,n#dM!]M:eZ":?:~Gx' nqn.2016-06.io.spdk:cnode29020 00:07:34.093 [2024-04-18 13:36:36.652226] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29020: invalid serial number '{,n#dM!]M:eZ":?:~Gx' 00:07:34.093 13:36:36 -- target/invalid.sh@54 -- # 
out='request: 00:07:34.093 { 00:07:34.093 "nqn": "nqn.2016-06.io.spdk:cnode29020", 00:07:34.093 "serial_number": "{,n#dM!\u007f]M:eZ\u007f\":?:~Gx", 00:07:34.093 "method": "nvmf_create_subsystem", 00:07:34.093 "req_id": 1 00:07:34.093 } 00:07:34.093 Got JSON-RPC error response 00:07:34.093 response: 00:07:34.093 { 00:07:34.093 "code": -32602, 00:07:34.093 "message": "Invalid SN {,n#dM!\u007f]M:eZ\u007f\":?:~Gx" 00:07:34.093 }' 00:07:34.093 13:36:36 -- target/invalid.sh@55 -- # [[ request: 00:07:34.093 { 00:07:34.093 "nqn": "nqn.2016-06.io.spdk:cnode29020", 00:07:34.093 "serial_number": "{,n#dM!\u007f]M:eZ\u007f\":?:~Gx", 00:07:34.093 "method": "nvmf_create_subsystem", 00:07:34.093 "req_id": 1 00:07:34.093 } 00:07:34.093 Got JSON-RPC error response 00:07:34.093 response: 00:07:34.093 { 00:07:34.093 "code": -32602, 00:07:34.093 "message": "Invalid SN {,n#dM!\u007f]M:eZ\u007f\":?:~Gx" 00:07:34.093 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:34.093 13:36:36 -- target/invalid.sh@58 -- # gen_random_s 41 00:07:34.093 13:36:36 -- target/invalid.sh@19 -- # local length=41 ll 00:07:34.093 13:36:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:34.093 13:36:36 -- target/invalid.sh@21 -- # local chars 00:07:34.093 13:36:36 -- target/invalid.sh@22 -- # local string 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # printf %x 74 00:07:34.093 13:36:36 -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # string+=J 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # printf %x 116 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # string+=t 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # printf %x 76 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # string+=L 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # printf %x 73 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # string+=I 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # printf %x 67 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:34.093 13:36:36 -- target/invalid.sh@25 -- # string+=C 00:07:34.093 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 45 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=- 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 69 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:34.094 13:36:36 -- 
target/invalid.sh@25 -- # string+=E 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 84 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=T 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 33 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='!' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 34 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='"' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 125 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='}' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 49 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=1 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 74 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=J 00:07:34.094 13:36:36 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 107 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=k 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 59 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=';' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 125 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='}' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 115 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=s 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 116 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=t 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 75 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=K 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 
-- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 99 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=c 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 125 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='}' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 57 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=9 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 106 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=j 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 109 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=m 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 92 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='\' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 
13:36:36 -- target/invalid.sh@25 -- # printf %x 45 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=- 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 83 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=S 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 60 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='<' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 72 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=H 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 56 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=8 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 90 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+=Z 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 123 00:07:34.094 
13:36:36 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # string+='{' 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.094 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # printf %x 121 00:07:34.094 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=y 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 83 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=S 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 116 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=t 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 111 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=o 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 36 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+='$' 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 71 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x47' 
00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=G 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 84 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=T 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 62 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+='>' 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # printf %x 102 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:34.095 13:36:36 -- target/invalid.sh@25 -- # string+=f 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:34.095 13:36:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:34.095 13:36:36 -- target/invalid.sh@28 -- # [[ J == \- ]] 00:07:34.095 13:36:36 -- target/invalid.sh@31 -- # echo 'JtLIC-ET!"}1Jk;}stKc}9jm\-Sf' 00:07:34.095 13:36:36 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'JtLIC-ET!"}1Jk;}stKc}9jm\-Sf' nqn.2016-06.io.spdk:cnode14866 00:07:34.353 [2024-04-18 13:36:37.033626] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14866: invalid model number 'JtLIC-ET!"}1Jk;}stKc}9jm\-Sf' 00:07:34.353 13:36:37 -- target/invalid.sh@58 -- # out='request: 00:07:34.353 { 00:07:34.353 "nqn": "nqn.2016-06.io.spdk:cnode14866", 00:07:34.353 "model_number": "JtLIC-ET!\"}1Jk;}stKc}9jm\\-Sf", 00:07:34.353 "method": "nvmf_create_subsystem", 00:07:34.353 "req_id": 1 00:07:34.353 } 
00:07:34.353 Got JSON-RPC error response 00:07:34.353 response: 00:07:34.353 { 00:07:34.353 "code": -32602, 00:07:34.353 "message": "Invalid MN JtLIC-ET!\"}1Jk;}stKc}9jm\\-Sf" 00:07:34.353 }' 00:07:34.353 13:36:37 -- target/invalid.sh@59 -- # [[ request: 00:07:34.353 { 00:07:34.353 "nqn": "nqn.2016-06.io.spdk:cnode14866", 00:07:34.353 "model_number": "JtLIC-ET!\"}1Jk;}stKc}9jm\\-Sf", 00:07:34.353 "method": "nvmf_create_subsystem", 00:07:34.353 "req_id": 1 00:07:34.353 } 00:07:34.353 Got JSON-RPC error response 00:07:34.353 response: 00:07:34.353 { 00:07:34.353 "code": -32602, 00:07:34.353 "message": "Invalid MN JtLIC-ET!\"}1Jk;}stKc}9jm\\-Sf" 00:07:34.353 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:34.353 13:36:37 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:34.611 [2024-04-18 13:36:37.278553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.611 13:36:37 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:34.868 13:36:37 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:34.868 13:36:37 -- target/invalid.sh@67 -- # echo '' 00:07:34.868 13:36:37 -- target/invalid.sh@67 -- # head -n 1 00:07:34.868 13:36:37 -- target/invalid.sh@67 -- # IP= 00:07:34.868 13:36:37 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:35.126 [2024-04-18 13:36:37.776202] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:35.126 13:36:37 -- target/invalid.sh@69 -- # out='request: 00:07:35.126 { 00:07:35.126 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:35.126 "listen_address": { 00:07:35.126 "trtype": "tcp", 00:07:35.126 "traddr": "", 00:07:35.126 "trsvcid": "4421" 00:07:35.126 }, 00:07:35.126 "method": 
"nvmf_subsystem_remove_listener", 00:07:35.126 "req_id": 1 00:07:35.126 } 00:07:35.126 Got JSON-RPC error response 00:07:35.126 response: 00:07:35.126 { 00:07:35.126 "code": -32602, 00:07:35.126 "message": "Invalid parameters" 00:07:35.126 }' 00:07:35.126 13:36:37 -- target/invalid.sh@70 -- # [[ request: 00:07:35.126 { 00:07:35.126 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:35.126 "listen_address": { 00:07:35.126 "trtype": "tcp", 00:07:35.126 "traddr": "", 00:07:35.126 "trsvcid": "4421" 00:07:35.126 }, 00:07:35.126 "method": "nvmf_subsystem_remove_listener", 00:07:35.126 "req_id": 1 00:07:35.126 } 00:07:35.126 Got JSON-RPC error response 00:07:35.126 response: 00:07:35.126 { 00:07:35.126 "code": -32602, 00:07:35.126 "message": "Invalid parameters" 00:07:35.126 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:35.126 13:36:37 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19666 -i 0 00:07:35.383 [2024-04-18 13:36:38.020963] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19666: invalid cntlid range [0-65519] 00:07:35.383 13:36:38 -- target/invalid.sh@73 -- # out='request: 00:07:35.383 { 00:07:35.383 "nqn": "nqn.2016-06.io.spdk:cnode19666", 00:07:35.383 "min_cntlid": 0, 00:07:35.383 "method": "nvmf_create_subsystem", 00:07:35.383 "req_id": 1 00:07:35.383 } 00:07:35.383 Got JSON-RPC error response 00:07:35.383 response: 00:07:35.383 { 00:07:35.383 "code": -32602, 00:07:35.383 "message": "Invalid cntlid range [0-65519]" 00:07:35.383 }' 00:07:35.383 13:36:38 -- target/invalid.sh@74 -- # [[ request: 00:07:35.383 { 00:07:35.383 "nqn": "nqn.2016-06.io.spdk:cnode19666", 00:07:35.383 "min_cntlid": 0, 00:07:35.383 "method": "nvmf_create_subsystem", 00:07:35.383 "req_id": 1 00:07:35.383 } 00:07:35.383 Got JSON-RPC error response 00:07:35.383 response: 00:07:35.383 { 00:07:35.383 "code": -32602, 00:07:35.383 "message": "Invalid 
cntlid range [0-65519]" 00:07:35.383 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.383 13:36:38 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6650 -i 65520 00:07:35.642 [2024-04-18 13:36:38.269814] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6650: invalid cntlid range [65520-65519] 00:07:35.642 13:36:38 -- target/invalid.sh@75 -- # out='request: 00:07:35.642 { 00:07:35.642 "nqn": "nqn.2016-06.io.spdk:cnode6650", 00:07:35.642 "min_cntlid": 65520, 00:07:35.642 "method": "nvmf_create_subsystem", 00:07:35.642 "req_id": 1 00:07:35.642 } 00:07:35.642 Got JSON-RPC error response 00:07:35.642 response: 00:07:35.642 { 00:07:35.642 "code": -32602, 00:07:35.642 "message": "Invalid cntlid range [65520-65519]" 00:07:35.642 }' 00:07:35.642 13:36:38 -- target/invalid.sh@76 -- # [[ request: 00:07:35.642 { 00:07:35.642 "nqn": "nqn.2016-06.io.spdk:cnode6650", 00:07:35.642 "min_cntlid": 65520, 00:07:35.642 "method": "nvmf_create_subsystem", 00:07:35.642 "req_id": 1 00:07:35.642 } 00:07:35.642 Got JSON-RPC error response 00:07:35.642 response: 00:07:35.642 { 00:07:35.642 "code": -32602, 00:07:35.642 "message": "Invalid cntlid range [65520-65519]" 00:07:35.642 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.642 13:36:38 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25537 -I 0 00:07:35.899 [2024-04-18 13:36:38.530752] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25537: invalid cntlid range [1-0] 00:07:35.899 13:36:38 -- target/invalid.sh@77 -- # out='request: 00:07:35.899 { 00:07:35.899 "nqn": "nqn.2016-06.io.spdk:cnode25537", 00:07:35.899 "max_cntlid": 0, 00:07:35.899 "method": "nvmf_create_subsystem", 00:07:35.899 "req_id": 1 00:07:35.899 } 00:07:35.899 Got JSON-RPC error response 
00:07:35.899 response: 00:07:35.899 { 00:07:35.899 "code": -32602, 00:07:35.899 "message": "Invalid cntlid range [1-0]" 00:07:35.899 }' 00:07:35.899 13:36:38 -- target/invalid.sh@78 -- # [[ request: 00:07:35.899 { 00:07:35.899 "nqn": "nqn.2016-06.io.spdk:cnode25537", 00:07:35.899 "max_cntlid": 0, 00:07:35.899 "method": "nvmf_create_subsystem", 00:07:35.899 "req_id": 1 00:07:35.899 } 00:07:35.899 Got JSON-RPC error response 00:07:35.899 response: 00:07:35.899 { 00:07:35.899 "code": -32602, 00:07:35.899 "message": "Invalid cntlid range [1-0]" 00:07:35.899 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:35.899 13:36:38 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5822 -I 65520 00:07:36.157 [2024-04-18 13:36:38.775535] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5822: invalid cntlid range [1-65520] 00:07:36.157 13:36:38 -- target/invalid.sh@79 -- # out='request: 00:07:36.157 { 00:07:36.157 "nqn": "nqn.2016-06.io.spdk:cnode5822", 00:07:36.157 "max_cntlid": 65520, 00:07:36.157 "method": "nvmf_create_subsystem", 00:07:36.157 "req_id": 1 00:07:36.157 } 00:07:36.157 Got JSON-RPC error response 00:07:36.157 response: 00:07:36.157 { 00:07:36.157 "code": -32602, 00:07:36.157 "message": "Invalid cntlid range [1-65520]" 00:07:36.157 }' 00:07:36.157 13:36:38 -- target/invalid.sh@80 -- # [[ request: 00:07:36.157 { 00:07:36.157 "nqn": "nqn.2016-06.io.spdk:cnode5822", 00:07:36.157 "max_cntlid": 65520, 00:07:36.157 "method": "nvmf_create_subsystem", 00:07:36.157 "req_id": 1 00:07:36.157 } 00:07:36.157 Got JSON-RPC error response 00:07:36.157 response: 00:07:36.157 { 00:07:36.157 "code": -32602, 00:07:36.157 "message": "Invalid cntlid range [1-65520]" 00:07:36.157 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:36.157 13:36:38 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20828 -i 6 -I 5 00:07:36.415 [2024-04-18 13:36:39.008313] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20828: invalid cntlid range [6-5] 00:07:36.415 13:36:39 -- target/invalid.sh@83 -- # out='request: 00:07:36.415 { 00:07:36.415 "nqn": "nqn.2016-06.io.spdk:cnode20828", 00:07:36.415 "min_cntlid": 6, 00:07:36.415 "max_cntlid": 5, 00:07:36.415 "method": "nvmf_create_subsystem", 00:07:36.415 "req_id": 1 00:07:36.415 } 00:07:36.415 Got JSON-RPC error response 00:07:36.415 response: 00:07:36.415 { 00:07:36.415 "code": -32602, 00:07:36.415 "message": "Invalid cntlid range [6-5]" 00:07:36.415 }' 00:07:36.415 13:36:39 -- target/invalid.sh@84 -- # [[ request: 00:07:36.415 { 00:07:36.415 "nqn": "nqn.2016-06.io.spdk:cnode20828", 00:07:36.415 "min_cntlid": 6, 00:07:36.415 "max_cntlid": 5, 00:07:36.415 "method": "nvmf_create_subsystem", 00:07:36.415 "req_id": 1 00:07:36.415 } 00:07:36.415 Got JSON-RPC error response 00:07:36.415 response: 00:07:36.415 { 00:07:36.415 "code": -32602, 00:07:36.415 "message": "Invalid cntlid range [6-5]" 00:07:36.415 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:36.415 13:36:39 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:36.415 13:36:39 -- target/invalid.sh@87 -- # out='request: 00:07:36.415 { 00:07:36.415 "name": "foobar", 00:07:36.415 "method": "nvmf_delete_target", 00:07:36.415 "req_id": 1 00:07:36.415 } 00:07:36.415 Got JSON-RPC error response 00:07:36.415 response: 00:07:36.415 { 00:07:36.415 "code": -32602, 00:07:36.415 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:07:36.415 }' 00:07:36.415 13:36:39 -- target/invalid.sh@88 -- # [[ request: 00:07:36.415 { 00:07:36.415 "name": "foobar", 00:07:36.415 "method": "nvmf_delete_target", 00:07:36.415 "req_id": 1 00:07:36.415 } 00:07:36.415 Got JSON-RPC error response 00:07:36.415 response: 00:07:36.415 { 00:07:36.415 "code": -32602, 00:07:36.415 "message": "The specified target doesn't exist, cannot delete it." 00:07:36.415 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:36.415 13:36:39 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:36.415 13:36:39 -- target/invalid.sh@91 -- # nvmftestfini 00:07:36.415 13:36:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:36.415 13:36:39 -- nvmf/common.sh@117 -- # sync 00:07:36.415 13:36:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.415 13:36:39 -- nvmf/common.sh@120 -- # set +e 00:07:36.415 13:36:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.415 13:36:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.415 rmmod nvme_tcp 00:07:36.415 rmmod nvme_fabrics 00:07:36.415 rmmod nvme_keyring 00:07:36.415 13:36:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.415 13:36:39 -- nvmf/common.sh@124 -- # set -e 00:07:36.415 13:36:39 -- nvmf/common.sh@125 -- # return 0 00:07:36.415 13:36:39 -- nvmf/common.sh@478 -- # '[' -n 2526529 ']' 00:07:36.415 13:36:39 -- nvmf/common.sh@479 -- # killprocess 2526529 00:07:36.415 13:36:39 -- common/autotest_common.sh@936 -- # '[' -z 2526529 ']' 00:07:36.415 13:36:39 -- common/autotest_common.sh@940 -- # kill -0 2526529 00:07:36.415 13:36:39 -- common/autotest_common.sh@941 -- # uname 00:07:36.415 13:36:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.415 13:36:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2526529 00:07:36.728 13:36:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:36.728 13:36:39 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:36.728 13:36:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2526529' 00:07:36.728 killing process with pid 2526529 00:07:36.728 13:36:39 -- common/autotest_common.sh@955 -- # kill 2526529 00:07:36.728 13:36:39 -- common/autotest_common.sh@960 -- # wait 2526529 00:07:36.728 13:36:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:36.728 13:36:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:36.728 13:36:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:36.728 13:36:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.728 13:36:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.728 13:36:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.728 13:36:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.728 13:36:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.276 13:36:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.276 00:07:39.276 real 0m8.582s 00:07:39.276 user 0m19.523s 00:07:39.276 sys 0m2.431s 00:07:39.276 13:36:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.276 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.276 ************************************ 00:07:39.277 END TEST nvmf_invalid 00:07:39.277 ************************************ 00:07:39.277 13:36:41 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.277 13:36:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.277 13:36:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.277 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.277 ************************************ 00:07:39.277 START TEST nvmf_abort 00:07:39.277 ************************************ 00:07:39.277 13:36:41 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.277 * Looking for test storage... 00:07:39.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.277 13:36:41 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.277 13:36:41 -- nvmf/common.sh@7 -- # uname -s 00:07:39.277 13:36:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.277 13:36:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.277 13:36:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.277 13:36:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.277 13:36:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.277 13:36:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.277 13:36:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.277 13:36:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.277 13:36:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.277 13:36:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.277 13:36:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.277 13:36:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.277 13:36:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.277 13:36:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.277 13:36:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.277 13:36:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.277 13:36:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.277 13:36:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.277 13:36:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:07:39.277 13:36:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.277 13:36:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.277 13:36:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.277 13:36:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.277 13:36:41 -- paths/export.sh@5 -- # export PATH 00:07:39.277 13:36:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.277 13:36:41 -- nvmf/common.sh@47 -- # : 0 00:07:39.277 13:36:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.277 13:36:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.277 13:36:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.277 13:36:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.277 13:36:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.277 13:36:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.277 13:36:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.277 13:36:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.277 13:36:41 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.277 13:36:41 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:39.277 13:36:41 -- target/abort.sh@14 -- # nvmftestinit 00:07:39.277 13:36:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:39.277 13:36:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.277 13:36:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:39.277 13:36:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:39.277 13:36:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:39.277 13:36:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.277 13:36:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.277 13:36:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.277 13:36:41 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:39.277 13:36:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:39.277 13:36:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.277 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 13:36:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:41.182 13:36:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.182 13:36:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.182 13:36:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.182 13:36:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.182 13:36:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.182 13:36:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.182 13:36:43 -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.182 13:36:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.182 13:36:43 -- nvmf/common.sh@296 -- # e810=() 00:07:41.182 13:36:43 -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.182 13:36:43 -- nvmf/common.sh@297 -- # x722=() 00:07:41.182 13:36:43 -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.182 13:36:43 -- nvmf/common.sh@298 -- # mlx=() 00:07:41.182 13:36:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.182 13:36:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.182 13:36:43 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.182 13:36:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.182 13:36:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:41.182 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:41.182 13:36:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.182 13:36:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:41.182 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:41.182 13:36:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.182 13:36:43 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.182 13:36:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.182 13:36:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.182 13:36:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:41.182 Found net devices under 0000:84:00.0: cvl_0_0 00:07:41.182 13:36:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.182 13:36:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.182 13:36:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.182 13:36:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:41.182 Found net devices under 0000:84:00.1: cvl_0_1 00:07:41.182 13:36:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:41.182 13:36:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:41.182 13:36:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.182 13:36:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.182 13:36:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.182 13:36:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.182 13:36:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.182 13:36:43 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.182 13:36:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.182 13:36:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.182 13:36:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.182 13:36:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.182 13:36:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.182 13:36:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.182 13:36:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.182 13:36:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.182 13:36:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.182 13:36:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.182 13:36:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.182 13:36:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.182 13:36:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:07:41.182 00:07:41.182 --- 10.0.0.2 ping statistics --- 00:07:41.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.182 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:07:41.182 13:36:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:41.182 00:07:41.182 --- 10.0.0.1 ping statistics --- 00:07:41.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.182 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:41.182 13:36:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.182 13:36:43 -- nvmf/common.sh@411 -- # return 0 00:07:41.182 13:36:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:41.182 13:36:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.182 13:36:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:41.182 13:36:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.182 13:36:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:41.182 13:36:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:41.182 13:36:43 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:41.182 13:36:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:41.182 13:36:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:41.182 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 13:36:43 -- nvmf/common.sh@470 -- # nvmfpid=2529065 00:07:41.182 13:36:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.182 13:36:43 -- nvmf/common.sh@471 -- # waitforlisten 2529065 00:07:41.182 13:36:43 -- common/autotest_common.sh@817 -- # '[' -z 2529065 ']' 00:07:41.182 13:36:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.182 13:36:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.182 13:36:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:41.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.182 13:36:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.182 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-04-18 13:36:43.955615] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:41.182 [2024-04-18 13:36:43.955705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.440 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.440 [2024-04-18 13:36:44.022495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.440 [2024-04-18 13:36:44.131101] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.440 [2024-04-18 13:36:44.131183] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.440 [2024-04-18 13:36:44.131199] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.440 [2024-04-18 13:36:44.131211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.440 [2024-04-18 13:36:44.131228] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
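The trace up to this point shows `nvmf_tcp_init` (nvmf/common.sh) splitting the two-port NIC so that target traffic runs inside a fresh network namespace while the initiator stays in the default one. The sequence can be sketched as a dry-run script; interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from this log, and `run` only prints each command (the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# Interface names and addresses match this log; adjust for your own NICs.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the default namespace

run() { echo "+ $*"; }   # swap the echo for 'sudo' to actually apply

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target, across the physical link
```

Because the two ports are wired back-to-back, the cross-namespace ping exercises the real PHY rather than a loopback path, which is why the log then launches `nvmf_tgt` with `ip netns exec cvl_0_0_ns_spdk`.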
00:07:41.440 [2024-04-18 13:36:44.131314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.440 [2024-04-18 13:36:44.131337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.440 [2024-04-18 13:36:44.131340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.440 13:36:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:41.440 13:36:44 -- common/autotest_common.sh@850 -- # return 0 00:07:41.440 13:36:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:41.440 13:36:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:41.440 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 13:36:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.700 13:36:44 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 [2024-04-18 13:36:44.268848] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.700 13:36:44 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 Malloc0 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.700 13:36:44 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 Delay0 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.700 13:36:44 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.700 13:36:44 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.700 13:36:44 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.700 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.700 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.700 [2024-04-18 13:36:44.345084] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.700 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.701 13:36:44 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.701 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:41.701 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:07:41.701 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:41.701 13:36:44 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:41.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.701 [2024-04-18 13:36:44.440296] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:44.238 Initializing NVMe Controllers 00:07:44.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:07:44.238 controller IO queue size 128 less than required 00:07:44.238 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:44.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:44.238 Initialization complete. Launching workers. 00:07:44.238 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33414 00:07:44.238 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33475, failed to submit 62 00:07:44.238 success 33418, unsuccess 57, failed 0 00:07:44.238 13:36:46 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.238 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:44.238 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:07:44.238 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:44.238 13:36:46 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:44.238 13:36:46 -- target/abort.sh@38 -- # nvmftestfini 00:07:44.238 13:36:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:44.238 13:36:46 -- nvmf/common.sh@117 -- # sync 00:07:44.238 13:36:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.238 13:36:46 -- nvmf/common.sh@120 -- # set +e 00:07:44.238 13:36:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.238 13:36:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.238 rmmod nvme_tcp 00:07:44.238 rmmod nvme_fabrics 00:07:44.238 rmmod nvme_keyring 00:07:44.238 13:36:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.238 13:36:46 -- nvmf/common.sh@124 -- # set -e 00:07:44.238 13:36:46 -- nvmf/common.sh@125 -- # return 0 00:07:44.238 13:36:46 -- nvmf/common.sh@478 -- # '[' -n 2529065 ']' 00:07:44.238 13:36:46 -- nvmf/common.sh@479 -- # killprocess 2529065 00:07:44.238 13:36:46 -- common/autotest_common.sh@936 -- # '[' -z 2529065 ']' 00:07:44.238 13:36:46 
-- common/autotest_common.sh@940 -- # kill -0 2529065 00:07:44.238 13:36:46 -- common/autotest_common.sh@941 -- # uname 00:07:44.238 13:36:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.238 13:36:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2529065 00:07:44.238 13:36:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:44.238 13:36:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:44.238 13:36:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2529065' 00:07:44.238 killing process with pid 2529065 00:07:44.238 13:36:46 -- common/autotest_common.sh@955 -- # kill 2529065 00:07:44.238 13:36:46 -- common/autotest_common.sh@960 -- # wait 2529065 00:07:44.238 13:36:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:44.238 13:36:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:44.238 13:36:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:44.238 13:36:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.238 13:36:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.238 13:36:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.238 13:36:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.238 13:36:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.779 13:36:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.779 00:07:46.779 real 0m7.327s 00:07:46.779 user 0m10.546s 00:07:46.779 sys 0m2.583s 00:07:46.779 13:36:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.779 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:07:46.779 ************************************ 00:07:46.779 END TEST nvmf_abort 00:07:46.779 ************************************ 00:07:46.779 13:36:49 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
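Before launching the abort example, target/abort.sh assembled its target through the short RPC sequence visible in the trace: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (presumably so aborts always find I/O still in flight), and a subsystem exposing it on port 4420. Collected in one place as a dry-run sketch (`run` only prints; the rpc.py path is the workspace one from this log):

```shell
#!/usr/bin/env bash
# RPC sequence used by target/abort.sh, as captured in the trace above.
# 'run' only prints; replace it with direct invocation to apply for real.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
run() { echo "+ $*"; }

run "$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256
run "$RPC" bdev_malloc_create 64 4096 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
run "$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000     # latency values as passed in the log
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
run "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The abort tool is then pointed at `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` with `-q 128`, which matches the "abort submitted 33475" / "success 33418" counters in the result block above.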
00:07:46.779 13:36:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:46.779 13:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.779 13:36:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.779 ************************************ 00:07:46.779 START TEST nvmf_ns_hotplug_stress 00:07:46.779 ************************************ 00:07:46.779 13:36:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:46.779 * Looking for test storage... 00:07:46.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.779 13:36:49 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.779 13:36:49 -- nvmf/common.sh@7 -- # uname -s 00:07:46.779 13:36:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.779 13:36:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.779 13:36:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.779 13:36:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.779 13:36:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.779 13:36:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.779 13:36:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.779 13:36:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.779 13:36:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.779 13:36:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.779 13:36:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:46.779 13:36:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:46.779 13:36:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.779 13:36:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:46.779 13:36:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.779 13:36:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.779 13:36:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.779 13:36:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.779 13:36:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.779 13:36:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.779 13:36:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.779 13:36:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.779 13:36:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.779 13:36:49 -- paths/export.sh@5 -- # export PATH 00:07:46.779 13:36:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.779 13:36:49 -- nvmf/common.sh@47 -- # : 0 00:07:46.779 13:36:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.780 13:36:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.780 13:36:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.780 13:36:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.780 13:36:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.780 13:36:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.780 13:36:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.780 13:36:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.780 13:36:49 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.780 13:36:49 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:07:46.780 13:36:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:46.780 13:36:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.780 13:36:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:46.780 13:36:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:46.780 13:36:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:46.780 13:36:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.780 13:36:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.780 13:36:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.780 13:36:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:46.780 13:36:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:46.780 13:36:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.780 13:36:49 -- common/autotest_common.sh@10 -- # set +x 00:07:48.687 13:36:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:48.687 13:36:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.687 13:36:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.687 13:36:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.687 13:36:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.687 13:36:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.687 13:36:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.687 13:36:51 -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.687 13:36:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.687 13:36:51 -- nvmf/common.sh@296 -- # e810=() 00:07:48.687 13:36:51 -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.687 13:36:51 -- nvmf/common.sh@297 -- # x722=() 00:07:48.687 13:36:51 -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.687 13:36:51 -- nvmf/common.sh@298 -- # mlx=() 00:07:48.687 13:36:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.687 13:36:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.687 13:36:51 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.687 13:36:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.687 13:36:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.687 13:36:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.687 13:36:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.687 13:36:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:48.687 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:48.687 13:36:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.687 13:36:51 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:48.687 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:48.687 13:36:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.687 13:36:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.687 13:36:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.687 13:36:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.687 13:36:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:48.687 13:36:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.687 13:36:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:48.687 Found net devices under 0000:84:00.0: cvl_0_0 00:07:48.687 13:36:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.687 13:36:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.687 13:36:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.687 13:36:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:48.687 13:36:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.687 13:36:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:48.687 Found net devices under 0000:84:00.1: cvl_0_1 00:07:48.687 13:36:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.687 13:36:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:48.688 13:36:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:48.688 13:36:51 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:07:48.688 13:36:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:48.688 13:36:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:48.688 13:36:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.688 13:36:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.688 13:36:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.688 13:36:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.688 13:36:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.688 13:36:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.688 13:36:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.688 13:36:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.688 13:36:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.688 13:36:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.688 13:36:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.688 13:36:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.688 13:36:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.688 13:36:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.688 13:36:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.688 13:36:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.688 13:36:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.688 13:36:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.688 13:36:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.688 13:36:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:07:48.688 00:07:48.688 --- 10.0.0.2 ping statistics --- 00:07:48.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.688 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:48.688 13:36:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:48.688 00:07:48.688 --- 10.0.0.1 ping statistics --- 00:07:48.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.688 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:48.688 13:36:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.688 13:36:51 -- nvmf/common.sh@411 -- # return 0 00:07:48.688 13:36:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:48.688 13:36:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.688 13:36:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:48.688 13:36:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:48.688 13:36:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.688 13:36:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:48.688 13:36:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:48.688 13:36:51 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:07:48.688 13:36:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:48.688 13:36:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:48.688 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.688 13:36:51 -- nvmf/common.sh@470 -- # nvmfpid=2531435 00:07:48.688 13:36:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.688 13:36:51 -- nvmf/common.sh@471 -- # waitforlisten 2531435 00:07:48.688 13:36:51 -- 
common/autotest_common.sh@817 -- # '[' -z 2531435 ']' 00:07:48.688 13:36:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.688 13:36:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:48.688 13:36:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.688 13:36:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:48.688 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.688 [2024-04-18 13:36:51.415463] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:07:48.688 [2024-04-18 13:36:51.415571] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.688 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.688 [2024-04-18 13:36:51.485333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.946 [2024-04-18 13:36:51.610323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.946 [2024-04-18 13:36:51.610393] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.946 [2024-04-18 13:36:51.610410] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.946 [2024-04-18 13:36:51.610424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.946 [2024-04-18 13:36:51.610436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:48.946 [2024-04-18 13:36:51.610499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.946 [2024-04-18 13:36:51.610555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.946 [2024-04-18 13:36:51.610559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.946 13:36:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:48.946 13:36:51 -- common/autotest_common.sh@850 -- # return 0 00:07:48.946 13:36:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:48.946 13:36:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:48.946 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:07:49.203 13:36:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.203 13:36:51 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:07:49.203 13:36:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.203 [2024-04-18 13:36:52.001853] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.460 13:36:52 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:49.460 13:36:52 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.718 [2024-04-18 13:36:52.476759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.718 13:36:52 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.975 13:36:52 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:50.538 Malloc0 00:07:50.538 13:36:53 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:50.538 Delay0 00:07:50.538 13:36:53 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.794 13:36:53 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:51.051 NULL1 00:07:51.051 13:36:53 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:51.308 13:36:54 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2531729 00:07:51.308 13:36:54 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:51.308 13:36:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:51.308 13:36:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.678 Read completed with error (sct=0, sc=11) 00:07:52.678 13:36:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.936 13:36:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:07:52.936 13:36:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:53.194 true 00:07:53.194 13:36:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:53.194 13:36:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.759 13:36:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.017 13:36:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:07:54.017 13:36:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:54.274 true 00:07:54.274 13:36:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:54.274 13:36:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.532 13:36:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.789 13:36:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:07:54.789 13:36:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:55.047 true 00:07:55.047 
13:36:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:55.047 13:36:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.979 13:36:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.235 13:36:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:07:56.235 13:36:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:56.492 true 00:07:56.492 13:36:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:56.492 13:36:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.749 13:36:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.007 13:36:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:07:57.007 13:36:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:57.265 true 00:07:57.265 13:36:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:57.265 13:36:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.523 13:37:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.780 13:37:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:07:57.780 13:37:00 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:58.036 true 00:07:58.036 13:37:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:58.036 13:37:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.888 13:37:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.145 13:37:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:07:59.145 13:37:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:59.402 true 00:07:59.402 13:37:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:07:59.402 13:37:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.659 13:37:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.918 13:37:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:07:59.918 13:37:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:00.208 true 00:08:00.208 13:37:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:00.208 13:37:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.164 13:37:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.421 13:37:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:01.421 13:37:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:01.678 true 00:08:01.678 13:37:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:01.678 13:37:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.935 13:37:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.193 13:37:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:08:02.193 13:37:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:02.449 true 00:08:02.449 13:37:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:02.449 13:37:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.706 13:37:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.962 13:37:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:02.962 13:37:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:03.220 true 00:08:03.220 13:37:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:03.220 13:37:05 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.590 13:37:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.590 13:37:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:04.590 13:37:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:04.846 true 00:08:04.846 13:37:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:04.846 13:37:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.103 13:37:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.360 13:37:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:05.360 13:37:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:05.617 true 00:08:05.874 13:37:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:05.874 13:37:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.874 13:37:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.131 13:37:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:06.131 13:37:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 
00:08:06.388 true 00:08:06.388 13:37:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:06.388 13:37:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.758 13:37:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.758 13:37:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:07.758 13:37:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:08.015 true 00:08:08.015 13:37:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:08.015 13:37:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.272 13:37:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.528 13:37:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:08.528 13:37:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:08.786 true 00:08:08.786 13:37:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:08.786 13:37:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.718 13:37:12 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.718 13:37:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:09.718 13:37:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:09.975 true 00:08:09.975 13:37:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:09.975 13:37:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.231 13:37:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.488 13:37:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:10.488 13:37:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:10.745 true 00:08:10.745 13:37:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:10.745 13:37:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.677 13:37:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.241 13:37:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:12.241 13:37:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:12.241 true 00:08:12.241 13:37:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:12.241 13:37:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:12.498 13:37:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.756 13:37:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:12.756 13:37:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:13.014 true 00:08:13.014 13:37:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:13.014 13:37:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.946 13:37:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.204 13:37:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:14.204 13:37:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:14.461 true 00:08:14.461 13:37:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:14.461 13:37:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.719 13:37:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.976 13:37:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:14.976 13:37:17 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:15.233 true 00:08:15.233 13:37:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:15.233 13:37:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.165 13:37:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.165 13:37:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:16.165 13:37:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:16.426 true 00:08:16.426 13:37:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:16.426 13:37:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.683 13:37:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.940 13:37:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:16.940 13:37:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:17.197 true 00:08:17.197 13:37:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:17.197 13:37:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.127 13:37:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.384 13:37:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:18.384 13:37:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:18.641 true 00:08:18.642 13:37:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:18.642 13:37:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.899 13:37:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.156 13:37:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:19.156 13:37:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:19.413 true 00:08:19.413 13:37:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:19.413 13:37:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.670 13:37:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.927 13:37:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:19.927 13:37:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:20.185 true 00:08:20.185 13:37:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:20.185 13:37:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:21.117 13:37:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.374 13:37:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:21.374 13:37:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:21.632 Initializing NVMe Controllers 00:08:21.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:21.632 Controller IO queue size 128, less than required. 00:08:21.632 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:21.632 Controller IO queue size 128, less than required. 00:08:21.632 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:21.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:21.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:21.632 Initialization complete. Launching workers. 
00:08:21.632 ======================================================== 00:08:21.632 Latency(us) 00:08:21.632 Device Information : IOPS MiB/s Average min max 00:08:21.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 810.93 0.40 77231.96 3071.31 1012595.68 00:08:21.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10240.68 5.00 12498.85 2770.93 450763.63 00:08:21.632 ======================================================== 00:08:21.632 Total : 11051.61 5.40 17248.75 2770.93 1012595.68 00:08:21.632 00:08:21.632 true 00:08:21.632 13:37:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2531729 00:08:21.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2531729) - No such process 00:08:21.632 13:37:24 -- target/ns_hotplug_stress.sh@44 -- # wait 2531729 00:08:21.632 13:37:24 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:21.632 13:37:24 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:21.632 13:37:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:21.632 13:37:24 -- nvmf/common.sh@117 -- # sync 00:08:21.632 13:37:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.632 13:37:24 -- nvmf/common.sh@120 -- # set +e 00:08:21.632 13:37:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.632 13:37:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.632 rmmod nvme_tcp 00:08:21.632 rmmod nvme_fabrics 00:08:21.632 rmmod nvme_keyring 00:08:21.632 13:37:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.632 13:37:24 -- nvmf/common.sh@124 -- # set -e 00:08:21.632 13:37:24 -- nvmf/common.sh@125 -- # return 0 00:08:21.632 13:37:24 -- nvmf/common.sh@478 -- # '[' -n 2531435 ']' 00:08:21.632 13:37:24 -- nvmf/common.sh@479 -- # killprocess 2531435 00:08:21.632 13:37:24 -- common/autotest_common.sh@936 -- # '[' -z 2531435 ']' 00:08:21.632 13:37:24 -- common/autotest_common.sh@940 -- # kill -0 2531435 00:08:21.632 
13:37:24 -- common/autotest_common.sh@941 -- # uname 00:08:21.632 13:37:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.632 13:37:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2531435 00:08:21.890 13:37:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:21.890 13:37:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:21.890 13:37:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2531435' 00:08:21.890 killing process with pid 2531435 00:08:21.890 13:37:24 -- common/autotest_common.sh@955 -- # kill 2531435 00:08:21.890 13:37:24 -- common/autotest_common.sh@960 -- # wait 2531435 00:08:22.147 13:37:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:22.147 13:37:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:22.147 13:37:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:22.147 13:37:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.147 13:37:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.147 13:37:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.147 13:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.147 13:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.047 13:37:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.047 00:08:24.047 real 0m37.664s 00:08:24.047 user 2m26.471s 00:08:24.047 sys 0m10.461s 00:08:24.047 13:37:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:24.047 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:08:24.047 ************************************ 00:08:24.047 END TEST nvmf_ns_hotplug_stress 00:08:24.047 ************************************ 00:08:24.047 13:37:26 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:24.047 13:37:26 -- common/autotest_common.sh@1087 -- 
# '[' 3 -le 1 ']' 00:08:24.047 13:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.047 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:08:24.305 ************************************ 00:08:24.305 START TEST nvmf_connect_stress 00:08:24.305 ************************************ 00:08:24.305 13:37:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:24.305 * Looking for test storage... 00:08:24.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.305 13:37:26 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.305 13:37:26 -- nvmf/common.sh@7 -- # uname -s 00:08:24.305 13:37:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.305 13:37:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.305 13:37:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.305 13:37:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.305 13:37:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.305 13:37:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.305 13:37:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.305 13:37:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.305 13:37:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.305 13:37:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.305 13:37:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:24.305 13:37:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:24.305 13:37:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.305 13:37:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.305 13:37:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.305 
13:37:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.305 13:37:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.305 13:37:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.305 13:37:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.305 13:37:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.305 13:37:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.305 13:37:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.305 13:37:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.305 13:37:26 -- paths/export.sh@5 -- # export PATH 00:08:24.305 13:37:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.305 13:37:26 -- nvmf/common.sh@47 -- # : 0 00:08:24.305 13:37:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.305 13:37:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.305 13:37:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.305 13:37:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.305 13:37:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.305 13:37:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.305 13:37:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.305 13:37:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.305 13:37:26 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:24.305 13:37:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:24.305 13:37:26 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:08:24.305 13:37:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:24.305 13:37:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:24.305 13:37:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:24.305 13:37:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.305 13:37:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.305 13:37:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.305 13:37:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:24.305 13:37:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:24.305 13:37:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.305 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:08:26.832 13:37:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.832 13:37:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.832 13:37:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.832 13:37:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.832 13:37:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.832 13:37:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.832 13:37:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.832 13:37:29 -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.832 13:37:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.832 13:37:29 -- nvmf/common.sh@296 -- # e810=() 00:08:26.832 13:37:29 -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.832 13:37:29 -- nvmf/common.sh@297 -- # x722=() 00:08:26.832 13:37:29 -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.832 13:37:29 -- nvmf/common.sh@298 -- # mlx=() 00:08:26.832 13:37:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.832 13:37:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.832 13:37:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.832 13:37:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.832 13:37:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.832 13:37:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:26.832 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:26.832 13:37:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.832 13:37:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:26.832 Found 0000:84:00.1 (0x8086 - 
0x159b) 00:08:26.832 13:37:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.832 13:37:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.832 13:37:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.832 13:37:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:26.832 Found net devices under 0000:84:00.0: cvl_0_0 00:08:26.832 13:37:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.832 13:37:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.832 13:37:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.832 13:37:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.832 13:37:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:26.832 Found net devices under 0000:84:00.1: cvl_0_1 00:08:26.832 13:37:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.832 13:37:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:26.832 13:37:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:26.832 13:37:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:26.832 13:37:29 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:26.832 13:37:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.832 13:37:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.832 13:37:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.832 13:37:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.832 13:37:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.832 13:37:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.832 13:37:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.832 13:37:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.832 13:37:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.832 13:37:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.832 13:37:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.833 13:37:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.833 13:37:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.833 13:37:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.833 13:37:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.833 13:37:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.833 13:37:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.833 13:37:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.833 13:37:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.833 13:37:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:26.833 00:08:26.833 --- 10.0.0.2 ping statistics --- 00:08:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.833 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:26.833 13:37:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:08:26.833 00:08:26.833 --- 10.0.0.1 ping statistics --- 00:08:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.833 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:26.833 13:37:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.833 13:37:29 -- nvmf/common.sh@411 -- # return 0 00:08:26.833 13:37:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:26.833 13:37:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.833 13:37:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:26.833 13:37:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:26.833 13:37:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.833 13:37:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:26.833 13:37:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:26.833 13:37:29 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:26.833 13:37:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:26.833 13:37:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:26.833 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.833 13:37:29 -- nvmf/common.sh@470 -- # nvmfpid=2537463 00:08:26.833 13:37:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:26.833 13:37:29 -- nvmf/common.sh@471 -- # waitforlisten 2537463 00:08:26.833 13:37:29 -- 
common/autotest_common.sh@817 -- # '[' -z 2537463 ']' 00:08:26.833 13:37:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.833 13:37:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:26.833 13:37:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.833 13:37:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:26.833 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:08:26.833 [2024-04-18 13:37:29.230460] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:08:26.833 [2024-04-18 13:37:29.230548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.833 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.833 [2024-04-18 13:37:29.298664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.833 [2024-04-18 13:37:29.413002] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.833 [2024-04-18 13:37:29.413071] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.833 [2024-04-18 13:37:29.413095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.833 [2024-04-18 13:37:29.413118] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.833 [2024-04-18 13:37:29.413135] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:26.833 [2024-04-18 13:37:29.413266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.833 [2024-04-18 13:37:29.413365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.833 [2024-04-18 13:37:29.413374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.766 13:37:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:27.766 13:37:30 -- common/autotest_common.sh@850 -- # return 0 00:08:27.766 13:37:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:27.766 13:37:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:27.766 13:37:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.766 13:37:30 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.766 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:27.766 [2024-04-18 13:37:30.232271] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.766 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.766 13:37:30 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.766 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:27.766 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.766 13:37:30 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.766 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:27.766 [2024-04-18 13:37:30.259308] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:27.766 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.766 13:37:30 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:27.766 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:27.766 NULL1 00:08:27.766 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.766 13:37:30 -- target/connect_stress.sh@21 -- # PERF_PID=2537617 00:08:27.766 13:37:30 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:27.766 13:37:30 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:27.766 13:37:30 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 13:37:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:27.766 13:37:30 -- target/connect_stress.sh@28 -- # cat 00:08:27.766 
13:37:30 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:27.766 13:37:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.766 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.766 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.024 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.024 13:37:30 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:28.024 13:37:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.024 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.024 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.281 13:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.281 13:37:30 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:28.281 13:37:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.281 13:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.281 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.539 13:37:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.539 13:37:31 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:28.539 13:37:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.539 13:37:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.539 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:08:28.797 13:37:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.797 13:37:31 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:28.797 13:37:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.797 13:37:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.797 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:08:29.360 13:37:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.360 13:37:31 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:29.360 13:37:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.360 13:37:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.360 13:37:31 -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.618 13:37:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.618 13:37:32 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:29.618 13:37:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.618 13:37:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.618 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:08:29.899 13:37:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.899 13:37:32 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:29.899 13:37:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.899 13:37:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.899 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:08:30.160 13:37:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.160 13:37:32 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:30.160 13:37:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.160 13:37:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.160 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:08:30.417 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.417 13:37:33 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:30.417 13:37:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.417 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.417 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.981 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.981 13:37:33 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:30.981 13:37:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.981 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.981 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:31.238 13:37:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.238 13:37:33 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:31.238 13:37:33 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.238 13:37:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.238 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:08:31.495 13:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.495 13:37:34 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:31.495 13:37:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.495 13:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.495 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:08:31.753 13:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.753 13:37:34 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:31.753 13:37:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.753 13:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.753 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.010 13:37:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.010 13:37:34 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:32.010 13:37:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.010 13:37:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.010 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.574 13:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.574 13:37:35 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:32.574 13:37:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.574 13:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.574 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:08:32.831 13:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.831 13:37:35 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:32.831 13:37:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.831 13:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.831 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:08:33.088 13:37:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.088 13:37:35 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:33.088 13:37:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.088 13:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.088 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:08:33.345 13:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.345 13:37:36 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:33.345 13:37:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.345 13:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.345 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:08:33.910 13:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.910 13:37:36 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:33.910 13:37:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.910 13:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.910 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:08:34.167 13:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.167 13:37:36 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:34.167 13:37:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.167 13:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.167 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:08:34.424 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.424 13:37:37 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:34.424 13:37:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.424 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.424 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.681 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.681 13:37:37 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:34.681 13:37:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.681 13:37:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.681 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:34.938 13:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.938 13:37:37 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:34.938 13:37:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.938 13:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.938 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:08:35.502 13:37:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.502 13:37:38 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:35.502 13:37:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.502 13:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.502 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.759 13:37:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.759 13:37:38 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:35.759 13:37:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.759 13:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.759 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:36.017 13:37:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.017 13:37:38 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:36.017 13:37:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.017 13:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.017 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:36.274 13:37:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.274 13:37:38 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:36.274 13:37:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.274 13:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.274 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:08:36.530 13:37:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.531 13:37:39 -- 
target/connect_stress.sh@34 -- # kill -0 2537617 00:08:36.531 13:37:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.531 13:37:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.531 13:37:39 -- common/autotest_common.sh@10 -- # set +x 00:08:37.094 13:37:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.094 13:37:39 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:37.094 13:37:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.094 13:37:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.094 13:37:39 -- common/autotest_common.sh@10 -- # set +x 00:08:37.352 13:37:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.352 13:37:39 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:37.352 13:37:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.352 13:37:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.352 13:37:39 -- common/autotest_common.sh@10 -- # set +x 00:08:37.609 13:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.609 13:37:40 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:37.609 13:37:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.609 13:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.609 13:37:40 -- common/autotest_common.sh@10 -- # set +x 00:08:37.609 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.866 13:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.866 13:37:40 -- target/connect_stress.sh@34 -- # kill -0 2537617 00:08:37.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2537617) - No such process 00:08:37.866 13:37:40 -- target/connect_stress.sh@38 -- # wait 2537617 00:08:37.866 13:37:40 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:37.866 13:37:40 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:08:37.866 13:37:40 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:37.866 13:37:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:37.866 13:37:40 -- nvmf/common.sh@117 -- # sync 00:08:37.866 13:37:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.866 13:37:40 -- nvmf/common.sh@120 -- # set +e 00:08:37.866 13:37:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.866 13:37:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.866 rmmod nvme_tcp 00:08:37.866 rmmod nvme_fabrics 00:08:37.866 rmmod nvme_keyring 00:08:37.867 13:37:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.867 13:37:40 -- nvmf/common.sh@124 -- # set -e 00:08:37.867 13:37:40 -- nvmf/common.sh@125 -- # return 0 00:08:37.867 13:37:40 -- nvmf/common.sh@478 -- # '[' -n 2537463 ']' 00:08:37.867 13:37:40 -- nvmf/common.sh@479 -- # killprocess 2537463 00:08:37.867 13:37:40 -- common/autotest_common.sh@936 -- # '[' -z 2537463 ']' 00:08:37.867 13:37:40 -- common/autotest_common.sh@940 -- # kill -0 2537463 00:08:37.867 13:37:40 -- common/autotest_common.sh@941 -- # uname 00:08:37.867 13:37:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.867 13:37:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2537463 00:08:37.867 13:37:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:37.867 13:37:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:37.867 13:37:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2537463' 00:08:37.867 killing process with pid 2537463 00:08:37.867 13:37:40 -- common/autotest_common.sh@955 -- # kill 2537463 00:08:37.867 13:37:40 -- common/autotest_common.sh@960 -- # wait 2537463 00:08:38.125 13:37:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:38.125 13:37:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:38.125 13:37:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:38.125 13:37:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.125 13:37:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.125 13:37:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.125 13:37:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.125 13:37:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.654 13:37:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.654 00:08:40.654 real 0m16.046s 00:08:40.654 user 0m40.220s 00:08:40.654 sys 0m6.242s 00:08:40.654 13:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.654 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:08:40.654 ************************************ 00:08:40.654 END TEST nvmf_connect_stress 00:08:40.654 ************************************ 00:08:40.654 13:37:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:40.654 13:37:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.654 13:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.654 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:08:40.654 ************************************ 00:08:40.654 START TEST nvmf_fused_ordering 00:08:40.654 ************************************ 00:08:40.654 13:37:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:40.654 * Looking for test storage... 
00:08:40.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.654 13:37:43 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.654 13:37:43 -- nvmf/common.sh@7 -- # uname -s 00:08:40.654 13:37:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.654 13:37:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.654 13:37:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.654 13:37:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.654 13:37:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.654 13:37:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.654 13:37:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.654 13:37:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.654 13:37:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.654 13:37:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.654 13:37:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:40.654 13:37:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:40.654 13:37:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.654 13:37:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.654 13:37:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.654 13:37:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.654 13:37:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.654 13:37:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.654 13:37:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.654 13:37:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.654 13:37:43 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.655 13:37:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.655 13:37:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.655 13:37:43 -- paths/export.sh@5 -- # export PATH 00:08:40.655 13:37:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.655 13:37:43 -- nvmf/common.sh@47 -- # : 0 00:08:40.655 13:37:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.655 13:37:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.655 13:37:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.655 13:37:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.655 13:37:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.655 13:37:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.655 13:37:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.655 13:37:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.655 13:37:43 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:40.655 13:37:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:40.655 13:37:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.655 13:37:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:40.655 13:37:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:40.655 13:37:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:40.655 13:37:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.655 13:37:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.655 13:37:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.655 13:37:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:40.655 13:37:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:40.655 13:37:43 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:08:40.655 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.555 13:37:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:42.555 13:37:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.555 13:37:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.555 13:37:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.555 13:37:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.555 13:37:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.555 13:37:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.555 13:37:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.555 13:37:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.555 13:37:45 -- nvmf/common.sh@296 -- # e810=() 00:08:42.555 13:37:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.555 13:37:45 -- nvmf/common.sh@297 -- # x722=() 00:08:42.555 13:37:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.555 13:37:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:42.555 13:37:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.555 13:37:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.555 13:37:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.555 13:37:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.555 13:37:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.555 13:37:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.555 13:37:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:42.555 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:42.555 13:37:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.555 13:37:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:42.555 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:42.555 13:37:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.555 13:37:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.555 13:37:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:08:42.555 13:37:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.555 13:37:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:42.555 13:37:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.555 13:37:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:42.555 Found net devices under 0000:84:00.0: cvl_0_0 00:08:42.555 13:37:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.555 13:37:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.555 13:37:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.555 13:37:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:42.555 13:37:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.555 13:37:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:42.555 Found net devices under 0000:84:00.1: cvl_0_1 00:08:42.555 13:37:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.556 13:37:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:42.556 13:37:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:42.556 13:37:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:42.556 13:37:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:42.556 13:37:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:42.556 13:37:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.556 13:37:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.556 13:37:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.556 13:37:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.556 13:37:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.556 13:37:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.556 13:37:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.556 13:37:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:42.556 13:37:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.556 13:37:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.556 13:37:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.556 13:37:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.556 13:37:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.556 13:37:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.556 13:37:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.556 13:37:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.556 13:37:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.556 13:37:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.556 13:37:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.556 13:37:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:42.556 00:08:42.556 --- 10.0.0.2 ping statistics --- 00:08:42.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.556 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:42.556 13:37:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:42.556 00:08:42.556 --- 10.0.0.1 ping statistics --- 00:08:42.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.556 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:42.556 13:37:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.556 13:37:45 -- nvmf/common.sh@411 -- # return 0 00:08:42.556 13:37:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:42.556 13:37:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.556 13:37:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:42.556 13:37:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:42.556 13:37:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.556 13:37:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:42.556 13:37:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:42.556 13:37:45 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:42.556 13:37:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:42.556 13:37:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.556 13:37:45 -- common/autotest_common.sh@10 -- # set +x 00:08:42.556 13:37:45 -- nvmf/common.sh@470 -- # nvmfpid=2540791 00:08:42.556 13:37:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:42.556 13:37:45 -- nvmf/common.sh@471 -- # waitforlisten 2540791 00:08:42.556 13:37:45 -- common/autotest_common.sh@817 -- # '[' -z 2540791 ']' 00:08:42.556 13:37:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.556 13:37:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:42.556 13:37:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.556 13:37:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:42.556 13:37:45 -- common/autotest_common.sh@10 -- # set +x 00:08:42.556 [2024-04-18 13:37:45.287609] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:08:42.556 [2024-04-18 13:37:45.287689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.556 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.556 [2024-04-18 13:37:45.355746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.814 [2024-04-18 13:37:45.470923] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.814 [2024-04-18 13:37:45.470986] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.814 [2024-04-18 13:37:45.471010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.814 [2024-04-18 13:37:45.471024] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.814 [2024-04-18 13:37:45.471036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:42.814 [2024-04-18 13:37:45.471068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.748 13:37:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:43.748 13:37:46 -- common/autotest_common.sh@850 -- # return 0 00:08:43.748 13:37:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:43.748 13:37:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 13:37:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.748 13:37:46 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.748 13:37:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 [2024-04-18 13:37:46.265816] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:43.748 13:37:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.748 13:37:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 [2024-04-18 13:37:46.281989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:43.748 13:37:46 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 NULL1 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:43.748 13:37:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:43.748 13:37:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:43.748 13:37:46 -- common/autotest_common.sh@10 -- # set +x 00:08:43.748 13:37:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:43.748 13:37:46 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:43.748 [2024-04-18 13:37:46.327847] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:08:43.748 [2024-04-18 13:37:46.327892] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540944 ] 00:08:43.748 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.314 Attached to nqn.2016-06.io.spdk:cnode1 00:08:44.314 Namespace ID: 1 size: 1GB 00:08:44.314 fused_ordering(0) 00:08:44.314 fused_ordering(1) 00:08:44.314 fused_ordering(2) 00:08:44.314 fused_ordering(3) 00:08:44.314 fused_ordering(4) 00:08:44.314 fused_ordering(5) 00:08:44.314 fused_ordering(6) 00:08:44.314 fused_ordering(7) 00:08:44.314 fused_ordering(8) 00:08:44.314 fused_ordering(9) 00:08:44.314 fused_ordering(10) 00:08:44.314 fused_ordering(11) 00:08:44.314 fused_ordering(12) 00:08:44.314 fused_ordering(13) 00:08:44.314 fused_ordering(14) 00:08:44.314 fused_ordering(15) 00:08:44.314 fused_ordering(16) 00:08:44.314 fused_ordering(17) 00:08:44.314 fused_ordering(18) 00:08:44.314 fused_ordering(19) 00:08:44.314 fused_ordering(20) 00:08:44.314 fused_ordering(21) 00:08:44.314 fused_ordering(22) 00:08:44.314 fused_ordering(23) 00:08:44.314 fused_ordering(24) 00:08:44.314 fused_ordering(25) 00:08:44.314 fused_ordering(26) 00:08:44.314 fused_ordering(27) 00:08:44.314 fused_ordering(28) 00:08:44.314 fused_ordering(29) 00:08:44.314 fused_ordering(30) 00:08:44.314 fused_ordering(31) 00:08:44.314 fused_ordering(32) 00:08:44.314 fused_ordering(33) 00:08:44.314 fused_ordering(34) 00:08:44.314 fused_ordering(35) 00:08:44.314 fused_ordering(36) 00:08:44.314 fused_ordering(37) 00:08:44.314 fused_ordering(38) 00:08:44.314 fused_ordering(39) 00:08:44.314 fused_ordering(40) 00:08:44.314 fused_ordering(41) 00:08:44.314 fused_ordering(42) 00:08:44.314 fused_ordering(43) 00:08:44.314 fused_ordering(44) 00:08:44.314 fused_ordering(45) 00:08:44.314 fused_ordering(46) 00:08:44.314 fused_ordering(47) 00:08:44.314 fused_ordering(48) 
00:08:44.314 fused_ordering(49) 00:08:44.314 fused_ordering(50) 00:08:44.314 fused_ordering(51) 00:08:44.314 fused_ordering(52) 00:08:44.314 fused_ordering(53) 00:08:44.314 fused_ordering(54) 00:08:44.314 fused_ordering(55) 00:08:44.314 fused_ordering(56) 00:08:44.314 fused_ordering(57) 00:08:44.314 fused_ordering(58) 00:08:44.314 fused_ordering(59) 00:08:44.314 fused_ordering(60) 00:08:44.314 fused_ordering(61) 00:08:44.314 fused_ordering(62) 00:08:44.314 fused_ordering(63) 00:08:44.314 fused_ordering(64) 00:08:44.314 fused_ordering(65) 00:08:44.314 fused_ordering(66) 00:08:44.314 fused_ordering(67) 00:08:44.314 fused_ordering(68) 00:08:44.314 fused_ordering(69) 00:08:44.314 fused_ordering(70) 00:08:44.314 fused_ordering(71) 00:08:44.314 fused_ordering(72) 00:08:44.314 fused_ordering(73) 00:08:44.314 fused_ordering(74) 00:08:44.314 fused_ordering(75) 00:08:44.314 fused_ordering(76) 00:08:44.314 fused_ordering(77) 00:08:44.314 fused_ordering(78) 00:08:44.314 fused_ordering(79) 00:08:44.314 fused_ordering(80) 00:08:44.314 fused_ordering(81) 00:08:44.314 fused_ordering(82) 00:08:44.314 fused_ordering(83) 00:08:44.314 fused_ordering(84) 00:08:44.314 fused_ordering(85) 00:08:44.314 fused_ordering(86) 00:08:44.314 fused_ordering(87) 00:08:44.314 fused_ordering(88) 00:08:44.314 fused_ordering(89) 00:08:44.314 fused_ordering(90) 00:08:44.314 fused_ordering(91) 00:08:44.314 fused_ordering(92) 00:08:44.314 fused_ordering(93) 00:08:44.315 fused_ordering(94) 00:08:44.315 fused_ordering(95) 00:08:44.315 fused_ordering(96) 00:08:44.315 fused_ordering(97) 00:08:44.315 fused_ordering(98) 00:08:44.315 fused_ordering(99) 00:08:44.315 fused_ordering(100) 00:08:44.315 fused_ordering(101) 00:08:44.315 fused_ordering(102) 00:08:44.315 fused_ordering(103) 00:08:44.315 fused_ordering(104) 00:08:44.315 fused_ordering(105) 00:08:44.315 fused_ordering(106) 00:08:44.315 fused_ordering(107) 00:08:44.315 fused_ordering(108) 00:08:44.315 fused_ordering(109) 00:08:44.315 fused_ordering(110) 
00:08:44.315 fused_ordering(111) 00:08:44.315 fused_ordering(112) 00:08:44.315 fused_ordering(113) 00:08:44.315 fused_ordering(114) 00:08:44.315 fused_ordering(115) 00:08:44.315 fused_ordering(116) 00:08:44.315 fused_ordering(117) 00:08:44.315 fused_ordering(118) 00:08:44.315 fused_ordering(119) 00:08:44.315 fused_ordering(120) 00:08:44.315 fused_ordering(121) 00:08:44.315 fused_ordering(122) 00:08:44.315 fused_ordering(123) 00:08:44.315 fused_ordering(124) 00:08:44.315 fused_ordering(125) 00:08:44.315 fused_ordering(126) 00:08:44.315 fused_ordering(127) 00:08:44.315 fused_ordering(128) 00:08:44.315 fused_ordering(129) 00:08:44.315 fused_ordering(130) 00:08:44.315 fused_ordering(131) 00:08:44.315 fused_ordering(132) 00:08:44.315 fused_ordering(133) 00:08:44.315 fused_ordering(134) 00:08:44.315 fused_ordering(135) 00:08:44.315 fused_ordering(136) 00:08:44.315 fused_ordering(137) 00:08:44.315 fused_ordering(138) 00:08:44.315 fused_ordering(139) 00:08:44.315 fused_ordering(140) 00:08:44.315 fused_ordering(141) 00:08:44.315 fused_ordering(142) 00:08:44.315 fused_ordering(143) 00:08:44.315 fused_ordering(144) 00:08:44.315 fused_ordering(145) 00:08:44.315 fused_ordering(146) 00:08:44.315 fused_ordering(147) 00:08:44.315 fused_ordering(148) 00:08:44.315 fused_ordering(149) 00:08:44.315 fused_ordering(150) 00:08:44.315 fused_ordering(151) 00:08:44.315 fused_ordering(152) 00:08:44.315 fused_ordering(153) 00:08:44.315 fused_ordering(154) 00:08:44.315 fused_ordering(155) 00:08:44.315 fused_ordering(156) 00:08:44.315 fused_ordering(157) 00:08:44.315 fused_ordering(158) 00:08:44.315 fused_ordering(159) 00:08:44.315 fused_ordering(160) 00:08:44.315 fused_ordering(161) 00:08:44.315 fused_ordering(162) 00:08:44.315 fused_ordering(163) 00:08:44.315 fused_ordering(164) 00:08:44.315 fused_ordering(165) 00:08:44.315 fused_ordering(166) 00:08:44.315 fused_ordering(167) 00:08:44.315 fused_ordering(168) 00:08:44.315 fused_ordering(169) 00:08:44.315 fused_ordering(170) 00:08:44.315 
fused_ordering(171) 00:08:44.315 fused_ordering(172) 00:08:44.315 fused_ordering(173) 00:08:44.315 fused_ordering(174) 00:08:44.315 fused_ordering(175) 00:08:44.315 fused_ordering(176) 00:08:44.315 fused_ordering(177) 00:08:44.315 fused_ordering(178) 00:08:44.315 fused_ordering(179) 00:08:44.315 fused_ordering(180) 00:08:44.315 fused_ordering(181) 00:08:44.315 fused_ordering(182) 00:08:44.315 fused_ordering(183) 00:08:44.315 fused_ordering(184) 00:08:44.315 fused_ordering(185) 00:08:44.315 fused_ordering(186) 00:08:44.315 fused_ordering(187) 00:08:44.315 fused_ordering(188) 00:08:44.315 fused_ordering(189) 00:08:44.315 fused_ordering(190) 00:08:44.315 fused_ordering(191) 00:08:44.315 fused_ordering(192) 00:08:44.315 fused_ordering(193) 00:08:44.315 fused_ordering(194) 00:08:44.315 fused_ordering(195) 00:08:44.315 fused_ordering(196) 00:08:44.315 fused_ordering(197) 00:08:44.315 fused_ordering(198) 00:08:44.315 fused_ordering(199) 00:08:44.315 fused_ordering(200) 00:08:44.315 fused_ordering(201) 00:08:44.315 fused_ordering(202) 00:08:44.315 fused_ordering(203) 00:08:44.315 fused_ordering(204) 00:08:44.315 fused_ordering(205) 00:08:44.573 fused_ordering(206) 00:08:44.573 fused_ordering(207) 00:08:44.573 fused_ordering(208) 00:08:44.573 fused_ordering(209) 00:08:44.573 fused_ordering(210) 00:08:44.573 fused_ordering(211) 00:08:44.573 fused_ordering(212) 00:08:44.573 fused_ordering(213) 00:08:44.573 fused_ordering(214) 00:08:44.573 fused_ordering(215) 00:08:44.573 fused_ordering(216) 00:08:44.573 fused_ordering(217) 00:08:44.573 fused_ordering(218) 00:08:44.573 fused_ordering(219) 00:08:44.573 fused_ordering(220) 00:08:44.573 fused_ordering(221) 00:08:44.573 fused_ordering(222) 00:08:44.573 fused_ordering(223) 00:08:44.573 fused_ordering(224) 00:08:44.573 fused_ordering(225) 00:08:44.573 fused_ordering(226) 00:08:44.573 fused_ordering(227) 00:08:44.573 fused_ordering(228) 00:08:44.573 fused_ordering(229) 00:08:44.573 fused_ordering(230) 00:08:44.573 fused_ordering(231) 
00:08:46.691 fused_ordering(1018) 00:08:46.691 fused_ordering(1019) 00:08:46.691 fused_ordering(1020) 00:08:46.691 fused_ordering(1021) 00:08:46.691 fused_ordering(1022) 00:08:46.691 fused_ordering(1023) 00:08:46.691 13:37:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:46.691 13:37:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:46.691 13:37:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.691 13:37:49 -- nvmf/common.sh@117 -- # sync 00:08:46.691 13:37:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.691 13:37:49 -- nvmf/common.sh@120 -- # set +e 00:08:46.691 13:37:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.691 13:37:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.691 rmmod nvme_tcp 00:08:46.691 rmmod nvme_fabrics 00:08:46.691 rmmod nvme_keyring 00:08:46.691 13:37:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.691 13:37:49 -- nvmf/common.sh@124 -- # set -e 00:08:46.691 13:37:49 -- nvmf/common.sh@125 -- # return 0 00:08:46.691 13:37:49 -- nvmf/common.sh@478 -- # '[' -n 2540791 ']' 00:08:46.691 13:37:49 -- nvmf/common.sh@479 -- # killprocess 2540791 00:08:46.691 13:37:49 -- common/autotest_common.sh@936 -- # '[' -z 2540791 ']' 00:08:46.691 13:37:49 -- common/autotest_common.sh@940 -- # kill -0 2540791 00:08:46.691 13:37:49 -- common/autotest_common.sh@941 -- # uname 00:08:46.691 13:37:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.691 13:37:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2540791 00:08:46.691 13:37:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:46.691 13:37:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:46.691 13:37:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2540791' 00:08:46.691 killing process with pid 2540791 00:08:46.691 13:37:49 -- common/autotest_common.sh@955 -- # kill 2540791 00:08:46.691 13:37:49 -- common/autotest_common.sh@960 -- 
# wait 2540791 00:08:46.949 13:37:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.949 13:37:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.949 13:37:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.949 13:37:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.949 13:37:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.949 13:37:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.949 13:37:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.949 13:37:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.485 13:37:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.485 00:08:49.485 real 0m8.586s 00:08:49.485 user 0m6.239s 00:08:49.485 sys 0m3.835s 00:08:49.485 13:37:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.485 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:08:49.485 ************************************ 00:08:49.485 END TEST nvmf_fused_ordering 00:08:49.485 ************************************ 00:08:49.485 13:37:51 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:49.485 13:37:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.485 13:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.485 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:08:49.485 ************************************ 00:08:49.485 START TEST nvmf_delete_subsystem 00:08:49.485 ************************************ 00:08:49.485 13:37:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:49.485 * Looking for test storage... 
00:08:49.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.485 13:37:51 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.485 13:37:51 -- nvmf/common.sh@7 -- # uname -s 00:08:49.485 13:37:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.485 13:37:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.485 13:37:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.485 13:37:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.485 13:37:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.485 13:37:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.485 13:37:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.485 13:37:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.485 13:37:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.485 13:37:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.485 13:37:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:49.485 13:37:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:49.485 13:37:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.485 13:37:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.485 13:37:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.485 13:37:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.485 13:37:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.485 13:37:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.485 13:37:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.485 13:37:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.485 13:37:51 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.485 13:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.485 13:37:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.485 13:37:51 -- paths/export.sh@5 -- # export PATH 00:08:49.485 13:37:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.485 13:37:51 -- nvmf/common.sh@47 -- # : 0 00:08:49.485 13:37:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.485 13:37:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.485 13:37:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.485 13:37:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.485 13:37:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.485 13:37:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.485 13:37:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.485 13:37:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.485 13:37:51 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:49.485 13:37:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:49.485 13:37:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.485 13:37:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.485 13:37:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.485 13:37:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.485 13:37:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.485 13:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.485 13:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.485 13:37:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.485 13:37:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.485 13:37:51 
-- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.485 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:08:51.385 13:37:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.385 13:37:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.385 13:37:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.385 13:37:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.385 13:37:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.385 13:37:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.385 13:37:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.385 13:37:53 -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.385 13:37:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.385 13:37:53 -- nvmf/common.sh@296 -- # e810=() 00:08:51.385 13:37:53 -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.385 13:37:53 -- nvmf/common.sh@297 -- # x722=() 00:08:51.385 13:37:53 -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.385 13:37:53 -- nvmf/common.sh@298 -- # mlx=() 00:08:51.385 13:37:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.385 13:37:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.385 13:37:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.385 13:37:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.385 13:37:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.385 13:37:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:51.385 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:51.385 13:37:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.385 13:37:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:51.385 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:51.385 13:37:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:08:51.385 13:37:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.385 13:37:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.385 13:37:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:51.385 Found net devices under 0000:84:00.0: cvl_0_0 00:08:51.385 13:37:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.385 13:37:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.385 13:37:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.385 13:37:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.385 13:37:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:51.385 Found net devices under 0000:84:00.1: cvl_0_1 00:08:51.385 13:37:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.385 13:37:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:51.385 13:37:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:51.385 13:37:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:51.385 13:37:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.385 13:37:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.385 13:37:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.385 13:37:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.385 13:37:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.385 13:37:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.385 13:37:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.385 13:37:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:51.385 13:37:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.385 13:37:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.385 13:37:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.385 13:37:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.386 13:37:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.386 13:37:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.386 13:37:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.386 13:37:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.386 13:37:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.386 13:37:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.386 13:37:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.386 13:37:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:08:51.386 00:08:51.386 --- 10.0.0.2 ping statistics --- 00:08:51.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.386 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:51.386 13:37:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:08:51.386 00:08:51.386 --- 10.0.0.1 ping statistics --- 00:08:51.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.386 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:51.386 13:37:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.386 13:37:54 -- nvmf/common.sh@411 -- # return 0 00:08:51.386 13:37:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.386 13:37:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.386 13:37:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.386 13:37:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.386 13:37:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.386 13:37:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.386 13:37:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.386 13:37:54 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:51.386 13:37:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.386 13:37:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.386 13:37:54 -- common/autotest_common.sh@10 -- # set +x 00:08:51.386 13:37:54 -- nvmf/common.sh@470 -- # nvmfpid=2543266 00:08:51.386 13:37:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:51.386 13:37:54 -- nvmf/common.sh@471 -- # waitforlisten 2543266 00:08:51.386 13:37:54 -- common/autotest_common.sh@817 -- # '[' -z 2543266 ']' 00:08:51.386 13:37:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.386 13:37:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.386 13:37:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.386 13:37:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.386 13:37:54 -- common/autotest_common.sh@10 -- # set +x 00:08:51.386 [2024-04-18 13:37:54.087993] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:08:51.386 [2024-04-18 13:37:54.088071] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.386 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.386 [2024-04-18 13:37:54.156907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.644 [2024-04-18 13:37:54.276631] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.644 [2024-04-18 13:37:54.276700] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.644 [2024-04-18 13:37:54.276717] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.644 [2024-04-18 13:37:54.276731] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.644 [2024-04-18 13:37:54.276742] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.644 [2024-04-18 13:37:54.276831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.644 [2024-04-18 13:37:54.276837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.575 13:37:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.575 13:37:55 -- common/autotest_common.sh@850 -- # return 0 00:08:52.575 13:37:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:52.575 13:37:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 13:37:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 [2024-04-18 13:37:55.064986] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 [2024-04-18 13:37:55.081191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:52.575 13:37:55 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 NULL1 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 Delay0 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.575 13:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.575 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.575 13:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@28 -- # perf_pid=2543358 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:52.575 13:37:55 -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:52.575 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.576 [2024-04-18 13:37:55.155863] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:54.472 13:37:57 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.472 13:37:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.472 13:37:57 -- common/autotest_common.sh@10 -- # set +x 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O 
failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Write completed with error (sct=0, sc=8) 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 [2024-04-18 13:37:57.328141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa60000c00 is same with the state(5) to be set 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.730 starting I/O failed: -6 00:08:54.730 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Write completed with error 
(sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error 
(sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 
00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 [2024-04-18 13:37:57.329030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3f450 is same with the state(5) to be set 00:08:54.731 starting I/O failed: -6 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.731 
Write completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Read completed with error (sct=0, sc=8) 00:08:54.731 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error 
(sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Read completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:54.732 Write completed with error (sct=0, sc=8) 00:08:55.664 [2024-04-18 13:37:58.296924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe40ad0 is same with the state(5) to be set 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 
00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 [2024-04-18 13:37:58.329801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa6000bf90 is same with the state(5) to be set 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 [2024-04-18 13:37:58.330136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffa6000c510 is same with the state(5) to be 
set 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 [2024-04-18 13:37:58.330671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3f020 is same with the state(5) to be set 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 
Write completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Write completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 Read completed with error (sct=0, sc=8) 00:08:55.665 [2024-04-18 13:37:58.330880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3f710 is same with the state(5) to be set 00:08:55.665 [2024-04-18 13:37:58.331740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe40ad0 (9): Bad file descriptor 00:08:55.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:55.665 13:37:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.665 13:37:58 -- target/delete_subsystem.sh@34 -- # delay=0 00:08:55.665 13:37:58 -- target/delete_subsystem.sh@35 -- # kill -0 2543358 00:08:55.665 13:37:58 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:55.665 Initializing NVMe Controllers 00:08:55.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:55.665 Controller IO queue size 128, less than required. 
00:08:55.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:55.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:55.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:55.665 Initialization complete. Launching workers. 00:08:55.665 ======================================================== 00:08:55.666 Latency(us) 00:08:55.666 Device Information : IOPS MiB/s Average min max 00:08:55.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.13 0.09 889031.78 456.61 1013415.11 00:08:55.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.14 0.08 888120.77 733.41 1013366.01 00:08:55.666 ======================================================== 00:08:55.666 Total : 347.26 0.17 888577.58 456.61 1013415.11 00:08:55.666 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@35 -- # kill -0 2543358 00:08:56.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2543358) - No such process 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@45 -- # NOT wait 2543358 00:08:56.243 13:37:58 -- common/autotest_common.sh@638 -- # local es=0 00:08:56.243 13:37:58 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2543358 00:08:56.243 13:37:58 -- common/autotest_common.sh@626 -- # local arg=wait 00:08:56.243 13:37:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.243 13:37:58 -- common/autotest_common.sh@630 -- # type -t wait 00:08:56.243 13:37:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.243 13:37:58 -- common/autotest_common.sh@641 -- # wait 2543358 00:08:56.243 13:37:58 -- common/autotest_common.sh@641 -- # es=1 00:08:56.243 13:37:58 -- common/autotest_common.sh@649 -- # (( es > 128 
)) 00:08:56.243 13:37:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:56.243 13:37:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.243 13:37:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.243 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:08:56.243 13:37:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.243 13:37:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.243 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:08:56.243 [2024-04-18 13:37:58.856343] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.243 13:37:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.243 13:37:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.243 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:08:56.243 13:37:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@54 -- # perf_pid=2543854 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@56 -- # delay=0 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:56.243 13:37:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.243 [2024-04-18 13:37:58.919083] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:56.808 13:37:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.808 13:37:59 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:56.808 13:37:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.372 13:37:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.372 13:37:59 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:57.372 13:37:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:57.629 13:38:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.629 13:38:00 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:57.629 13:38:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.194 13:38:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.194 13:38:00 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:58.194 13:38:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.759 13:38:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.759 13:38:01 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:58.759 13:38:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.322 13:38:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.322 13:38:01 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:59.322 13:38:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.322 Initializing NVMe Controllers 00:08:59.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.322 Controller IO queue size 128, less than required. 00:08:59.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:59.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:59.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:59.322 Initialization complete. Launching workers. 00:08:59.322 ======================================================== 00:08:59.322 Latency(us) 00:08:59.322 Device Information : IOPS MiB/s Average min max 00:08:59.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003879.54 1000309.23 1013188.37 00:08:59.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004752.44 1000219.61 1040915.06 00:08:59.322 ======================================================== 00:08:59.322 Total : 256.00 0.12 1004315.99 1000219.61 1040915.06 00:08:59.322 00:08:59.889 13:38:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.889 13:38:02 -- target/delete_subsystem.sh@57 -- # kill -0 2543854 00:08:59.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2543854) - No such process 00:08:59.889 13:38:02 -- target/delete_subsystem.sh@67 -- # wait 2543854 00:08:59.889 13:38:02 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:59.889 13:38:02 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:59.889 13:38:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:59.889 13:38:02 -- nvmf/common.sh@117 -- # sync 00:08:59.889 13:38:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.889 13:38:02 -- nvmf/common.sh@120 -- # set +e 00:08:59.889 13:38:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.889 13:38:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.889 rmmod nvme_tcp 00:08:59.889 rmmod nvme_fabrics 00:08:59.889 rmmod nvme_keyring 00:08:59.889 13:38:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.889 13:38:02 -- nvmf/common.sh@124 -- # set -e 00:08:59.889 13:38:02 -- 
nvmf/common.sh@125 -- # return 0 00:08:59.889 13:38:02 -- nvmf/common.sh@478 -- # '[' -n 2543266 ']' 00:08:59.889 13:38:02 -- nvmf/common.sh@479 -- # killprocess 2543266 00:08:59.889 13:38:02 -- common/autotest_common.sh@936 -- # '[' -z 2543266 ']' 00:08:59.889 13:38:02 -- common/autotest_common.sh@940 -- # kill -0 2543266 00:08:59.889 13:38:02 -- common/autotest_common.sh@941 -- # uname 00:08:59.889 13:38:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.889 13:38:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2543266 00:08:59.889 13:38:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:59.889 13:38:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:59.889 13:38:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2543266' 00:08:59.889 killing process with pid 2543266 00:08:59.889 13:38:02 -- common/autotest_common.sh@955 -- # kill 2543266 00:08:59.889 13:38:02 -- common/autotest_common.sh@960 -- # wait 2543266 00:09:00.148 13:38:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:00.148 13:38:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:00.148 13:38:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:00.148 13:38:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.148 13:38:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.148 13:38:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.148 13:38:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.148 13:38:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.050 13:38:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.050 00:09:02.050 real 0m13.002s 00:09:02.050 user 0m29.384s 00:09:02.050 sys 0m3.015s 00:09:02.050 13:38:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.050 13:38:04 -- common/autotest_common.sh@10 -- # set +x 00:09:02.050 
************************************ 00:09:02.050 END TEST nvmf_delete_subsystem 00:09:02.050 ************************************ 00:09:02.050 13:38:04 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:02.050 13:38:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.050 13:38:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.050 13:38:04 -- common/autotest_common.sh@10 -- # set +x 00:09:02.309 ************************************ 00:09:02.309 START TEST nvmf_ns_masking 00:09:02.309 ************************************ 00:09:02.309 13:38:04 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:02.309 * Looking for test storage... 00:09:02.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.309 13:38:04 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.309 13:38:04 -- nvmf/common.sh@7 -- # uname -s 00:09:02.309 13:38:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.309 13:38:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.309 13:38:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.309 13:38:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.309 13:38:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.309 13:38:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.309 13:38:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.309 13:38:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.309 13:38:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.309 13:38:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.309 13:38:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.309 13:38:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.309 
13:38:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.309 13:38:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.309 13:38:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.309 13:38:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.309 13:38:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.309 13:38:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.309 13:38:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.309 13:38:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.309 13:38:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.309 13:38:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.309 13:38:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.309 13:38:05 -- paths/export.sh@5 -- # export PATH 00:09:02.309 13:38:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.309 13:38:05 -- nvmf/common.sh@47 -- # : 0 00:09:02.309 13:38:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.309 13:38:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.309 13:38:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.309 13:38:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.309 13:38:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.309 13:38:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.309 13:38:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.309 13:38:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.309 13:38:05 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.309 13:38:05 -- target/ns_masking.sh@11 -- # loops=5 
00:09:02.309 13:38:05 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:02.309 13:38:05 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:02.309 13:38:05 -- target/ns_masking.sh@15 -- # uuidgen 00:09:02.309 13:38:05 -- target/ns_masking.sh@15 -- # HOSTID=db8ee65a-1b0d-41c9-8794-203199a6f8d7 00:09:02.309 13:38:05 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:02.309 13:38:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:02.309 13:38:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.309 13:38:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:02.309 13:38:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:02.309 13:38:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:02.309 13:38:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.309 13:38:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.309 13:38:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.309 13:38:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:02.309 13:38:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:02.309 13:38:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.309 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:09:04.238 13:38:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:04.238 13:38:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.238 13:38:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.238 13:38:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.238 13:38:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.238 13:38:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.238 13:38:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.238 13:38:06 -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.238 13:38:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.238 13:38:06 -- nvmf/common.sh@296 -- # e810=() 00:09:04.238 13:38:06 -- 
nvmf/common.sh@296 -- # local -ga e810 00:09:04.238 13:38:06 -- nvmf/common.sh@297 -- # x722=() 00:09:04.238 13:38:06 -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.238 13:38:06 -- nvmf/common.sh@298 -- # mlx=() 00:09:04.238 13:38:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.238 13:38:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.238 13:38:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.238 13:38:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.238 13:38:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.238 13:38:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:04.238 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:04.238 13:38:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:09:04.238 13:38:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.238 13:38:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:04.238 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:04.238 13:38:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.238 13:38:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.238 13:38:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.238 13:38:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:04.238 Found net devices under 0000:84:00.0: cvl_0_0 00:09:04.238 13:38:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.238 13:38:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.238 13:38:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.238 13:38:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.238 13:38:06 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:04.238 Found net devices under 0000:84:00.1: cvl_0_1 00:09:04.238 13:38:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.238 13:38:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:04.238 13:38:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:04.238 13:38:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:04.238 13:38:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.238 13:38:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.238 13:38:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.238 13:38:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.238 13:38:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.238 13:38:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.238 13:38:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.238 13:38:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.238 13:38:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.238 13:38:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.238 13:38:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.238 13:38:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.238 13:38:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.238 13:38:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.496 13:38:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.496 13:38:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.496 13:38:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.497 13:38:07 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.497 13:38:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.497 13:38:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:09:04.497 00:09:04.497 --- 10.0.0.2 ping statistics --- 00:09:04.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.497 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:04.497 13:38:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:04.497 00:09:04.497 --- 10.0.0.1 ping statistics --- 00:09:04.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.497 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:04.497 13:38:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.497 13:38:07 -- nvmf/common.sh@411 -- # return 0 00:09:04.497 13:38:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:04.497 13:38:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.497 13:38:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:04.497 13:38:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:04.497 13:38:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.497 13:38:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:04.497 13:38:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:04.497 13:38:07 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:04.497 13:38:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:04.497 13:38:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:04.497 13:38:07 -- common/autotest_common.sh@10 -- # set +x 00:09:04.497 13:38:07 -- nvmf/common.sh@470 -- # 
nvmfpid=2546228 00:09:04.497 13:38:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.497 13:38:07 -- nvmf/common.sh@471 -- # waitforlisten 2546228 00:09:04.497 13:38:07 -- common/autotest_common.sh@817 -- # '[' -z 2546228 ']' 00:09:04.497 13:38:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.497 13:38:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:04.497 13:38:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.497 13:38:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:04.497 13:38:07 -- common/autotest_common.sh@10 -- # set +x 00:09:04.497 [2024-04-18 13:38:07.178348] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:09:04.497 [2024-04-18 13:38:07.178430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.497 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.497 [2024-04-18 13:38:07.244270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.754 [2024-04-18 13:38:07.355731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.754 [2024-04-18 13:38:07.355789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.754 [2024-04-18 13:38:07.355817] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.754 [2024-04-18 13:38:07.355829] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:04.754 [2024-04-18 13:38:07.355839] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.754 [2024-04-18 13:38:07.355891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.754 [2024-04-18 13:38:07.356013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.754 [2024-04-18 13:38:07.356043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.754 [2024-04-18 13:38:07.356046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.754 13:38:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:04.754 13:38:07 -- common/autotest_common.sh@850 -- # return 0 00:09:04.754 13:38:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:04.754 13:38:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:04.754 13:38:07 -- common/autotest_common.sh@10 -- # set +x 00:09:04.754 13:38:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.754 13:38:07 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.011 [2024-04-18 13:38:07.759774] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.011 13:38:07 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:05.011 13:38:07 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:05.011 13:38:07 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:05.269 Malloc1 00:09:05.269 13:38:08 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:05.526 Malloc2 00:09:05.526 13:38:08 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:09:05.784 13:38:08 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:06.041 13:38:08 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.298 [2024-04-18 13:38:08.994734] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.298 13:38:09 -- target/ns_masking.sh@61 -- # connect 00:09:06.298 13:38:09 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db8ee65a-1b0d-41c9-8794-203199a6f8d7 -a 10.0.0.2 -s 4420 -i 4 00:09:06.555 13:38:09 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.555 13:38:09 -- common/autotest_common.sh@1184 -- # local i=0 00:09:06.555 13:38:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.555 13:38:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:06.555 13:38:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:08.448 13:38:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:08.448 13:38:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:08.448 13:38:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.448 13:38:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:08.448 13:38:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.448 13:38:11 -- common/autotest_common.sh@1194 -- # return 0 00:09:08.448 13:38:11 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:08.448 13:38:11 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:08.448 13:38:11 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 
00:09:08.448 13:38:11 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:08.448 13:38:11 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:08.448 13:38:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.448 13:38:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:08.448 [ 0]:0x1 00:09:08.448 13:38:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.448 13:38:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.448 13:38:11 -- target/ns_masking.sh@40 -- # nguid=753ed96f782c423c91d392d82482778b 00:09:08.448 13:38:11 -- target/ns_masking.sh@41 -- # [[ 753ed96f782c423c91d392d82482778b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.448 13:38:11 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:08.704 13:38:11 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:08.704 13:38:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.704 13:38:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:08.704 [ 0]:0x1 00:09:08.704 13:38:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.704 13:38:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:08.704 13:38:11 -- target/ns_masking.sh@40 -- # nguid=753ed96f782c423c91d392d82482778b 00:09:08.704 13:38:11 -- target/ns_masking.sh@41 -- # [[ 753ed96f782c423c91d392d82482778b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.704 13:38:11 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:08.704 13:38:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:08.704 13:38:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:08.704 [ 1]:0x2 00:09:08.704 13:38:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.704 13:38:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:09.026 13:38:11 -- target/ns_masking.sh@40 -- # 
nguid=6151dace14b844408901cdc0d90dbf01 00:09:09.026 13:38:11 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.026 13:38:11 -- target/ns_masking.sh@69 -- # disconnect 00:09:09.026 13:38:11 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.026 13:38:11 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.282 13:38:11 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:09.539 13:38:12 -- target/ns_masking.sh@77 -- # connect 1 00:09:09.539 13:38:12 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db8ee65a-1b0d-41c9-8794-203199a6f8d7 -a 10.0.0.2 -s 4420 -i 4 00:09:09.539 13:38:12 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:09.539 13:38:12 -- common/autotest_common.sh@1184 -- # local i=0 00:09:09.539 13:38:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.539 13:38:12 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:09.539 13:38:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:09.539 13:38:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:12.063 13:38:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:12.063 13:38:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:12.063 13:38:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.063 13:38:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:12.063 13:38:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:09:12.063 13:38:14 -- common/autotest_common.sh@1194 -- # return 0 00:09:12.063 13:38:14 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:12.063 13:38:14 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:12.063 13:38:14 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:12.063 13:38:14 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:12.063 13:38:14 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:12.063 13:38:14 -- common/autotest_common.sh@638 -- # local es=0 00:09:12.063 13:38:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:12.063 13:38:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:12.063 13:38:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:12.063 13:38:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:12.063 13:38:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:12.063 13:38:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:12.063 13:38:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.063 13:38:14 -- common/autotest_common.sh@641 -- # es=1 00:09:12.063 13:38:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:12.063 13:38:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:12.063 13:38:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:12.063 13:38:14 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 
00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:12.063 [ 0]:0x2 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:12.063 13:38:14 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.063 13:38:14 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.063 13:38:14 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:12.063 [ 0]:0x1 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nguid=753ed96f782c423c91d392d82482778b 00:09:12.063 13:38:14 -- target/ns_masking.sh@41 -- # [[ 753ed96f782c423c91d392d82482778b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.063 13:38:14 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.063 13:38:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:12.063 [ 1]:0x2 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.063 13:38:14 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:12.063 13:38:14 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.063 13:38:14 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.320 13:38:15 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:12.320 13:38:15 -- common/autotest_common.sh@638 -- # local es=0 00:09:12.320 13:38:15 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:12.320 13:38:15 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:12.320 13:38:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:12.320 13:38:15 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:12.320 13:38:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:12.320 13:38:15 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:12.320 13:38:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.320 13:38:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:12.320 13:38:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.320 13:38:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.320 13:38:15 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:12.320 13:38:15 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.320 13:38:15 -- common/autotest_common.sh@641 -- # es=1 00:09:12.320 13:38:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:12.320 13:38:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:12.320 13:38:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:12.320 13:38:15 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:12.320 13:38:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:12.320 13:38:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:12.320 [ 0]:0x2 
00:09:12.320 13:38:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.320 13:38:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:12.577 13:38:15 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:12.577 13:38:15 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.578 13:38:15 -- target/ns_masking.sh@91 -- # disconnect 00:09:12.578 13:38:15 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.578 13:38:15 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.835 13:38:15 -- target/ns_masking.sh@95 -- # connect 2 00:09:12.835 13:38:15 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db8ee65a-1b0d-41c9-8794-203199a6f8d7 -a 10.0.0.2 -s 4420 -i 4 00:09:12.835 13:38:15 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:12.835 13:38:15 -- common/autotest_common.sh@1184 -- # local i=0 00:09:12.835 13:38:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.835 13:38:15 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:12.835 13:38:15 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:12.835 13:38:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:15.360 13:38:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:15.360 13:38:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:15.360 13:38:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.360 13:38:17 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:15.360 13:38:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices 
== nvme_device_counter )) 00:09:15.360 13:38:17 -- common/autotest_common.sh@1194 -- # return 0 00:09:15.360 13:38:17 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:15.360 13:38:17 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:15.360 13:38:17 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:15.360 13:38:17 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:15.360 13:38:17 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:15.360 13:38:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.360 13:38:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:15.360 [ 0]:0x1 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # nguid=753ed96f782c423c91d392d82482778b 00:09:15.360 13:38:17 -- target/ns_masking.sh@41 -- # [[ 753ed96f782c423c91d392d82482778b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.360 13:38:17 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:15.360 13:38:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.360 13:38:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:15.360 [ 1]:0x2 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.360 13:38:17 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:15.360 13:38:17 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.360 13:38:17 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:15.360 13:38:18 -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:09:15.360 13:38:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:15.360 13:38:18 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:15.360 13:38:18 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.360 13:38:18 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:15.360 13:38:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.360 13:38:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:15.360 13:38:18 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.360 13:38:18 -- common/autotest_common.sh@641 -- # es=1 00:09:15.360 13:38:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:15.360 13:38:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:15.360 13:38:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:15.360 13:38:18 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:15.360 13:38:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.360 13:38:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:15.360 [ 0]:0x2 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.360 13:38:18 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:15.360 13:38:18 -- target/ns_masking.sh@41 -- # [[ 
6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.360 13:38:18 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:15.360 13:38:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:15.360 13:38:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:15.360 13:38:18 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.360 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.361 13:38:18 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.361 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.361 13:38:18 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.361 13:38:18 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:15.361 13:38:18 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:15.618 [2024-04-18 13:38:18.328548] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:15.618 request: 00:09:15.618 { 00:09:15.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.618 "nsid": 2, 
00:09:15.618 "host": "nqn.2016-06.io.spdk:host1", 00:09:15.618 "method": "nvmf_ns_remove_host", 00:09:15.618 "req_id": 1 00:09:15.618 } 00:09:15.618 Got JSON-RPC error response 00:09:15.618 response: 00:09:15.618 { 00:09:15.618 "code": -32602, 00:09:15.618 "message": "Invalid parameters" 00:09:15.618 } 00:09:15.618 13:38:18 -- common/autotest_common.sh@641 -- # es=1 00:09:15.618 13:38:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:15.618 13:38:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:15.618 13:38:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:15.618 13:38:18 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:15.619 13:38:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:15.619 13:38:18 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:15.619 13:38:18 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:15.619 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.619 13:38:18 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:15.619 13:38:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:15.619 13:38:18 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:15.619 13:38:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.619 13:38:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:15.619 13:38:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.619 13:38:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.619 13:38:18 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:15.619 13:38:18 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.619 13:38:18 -- common/autotest_common.sh@641 -- # es=1 00:09:15.619 13:38:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:15.619 13:38:18 -- common/autotest_common.sh@660 
-- # [[ -n '' ]] 00:09:15.619 13:38:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:15.619 13:38:18 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:15.619 13:38:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:15.619 13:38:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:15.619 [ 0]:0x2 00:09:15.619 13:38:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:15.619 13:38:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:15.876 13:38:18 -- target/ns_masking.sh@40 -- # nguid=6151dace14b844408901cdc0d90dbf01 00:09:15.876 13:38:18 -- target/ns_masking.sh@41 -- # [[ 6151dace14b844408901cdc0d90dbf01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.876 13:38:18 -- target/ns_masking.sh@108 -- # disconnect 00:09:15.876 13:38:18 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.876 13:38:18 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.134 13:38:18 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:16.134 13:38:18 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:16.134 13:38:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:16.134 13:38:18 -- nvmf/common.sh@117 -- # sync 00:09:16.134 13:38:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.134 13:38:18 -- nvmf/common.sh@120 -- # set +e 00:09:16.134 13:38:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.134 13:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.134 rmmod nvme_tcp 00:09:16.134 rmmod nvme_fabrics 00:09:16.134 rmmod nvme_keyring 00:09:16.134 13:38:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.134 13:38:18 -- nvmf/common.sh@124 -- # set -e 00:09:16.134 13:38:18 -- nvmf/common.sh@125 -- # return 0 00:09:16.134 13:38:18 -- 
nvmf/common.sh@478 -- # '[' -n 2546228 ']' 00:09:16.134 13:38:18 -- nvmf/common.sh@479 -- # killprocess 2546228 00:09:16.134 13:38:18 -- common/autotest_common.sh@936 -- # '[' -z 2546228 ']' 00:09:16.134 13:38:18 -- common/autotest_common.sh@940 -- # kill -0 2546228 00:09:16.134 13:38:18 -- common/autotest_common.sh@941 -- # uname 00:09:16.134 13:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.134 13:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2546228 00:09:16.134 13:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:16.134 13:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:16.134 13:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2546228' 00:09:16.134 killing process with pid 2546228 00:09:16.134 13:38:18 -- common/autotest_common.sh@955 -- # kill 2546228 00:09:16.134 13:38:18 -- common/autotest_common.sh@960 -- # wait 2546228 00:09:16.393 13:38:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:16.393 13:38:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:16.393 13:38:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:16.393 13:38:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.393 13:38:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.393 13:38:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.393 13:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.393 13:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.926 13:38:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.926 00:09:18.926 real 0m16.229s 00:09:18.926 user 0m50.088s 00:09:18.926 sys 0m3.671s 00:09:18.926 13:38:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:18.926 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:09:18.926 ************************************ 00:09:18.926 END TEST nvmf_ns_masking 00:09:18.926 
************************************ 00:09:18.926 13:38:21 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:18.926 13:38:21 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:18.926 13:38:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:18.926 13:38:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.926 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:09:18.926 ************************************ 00:09:18.926 START TEST nvmf_nvme_cli 00:09:18.926 ************************************ 00:09:18.926 13:38:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:18.926 * Looking for test storage... 00:09:18.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.926 13:38:21 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.926 13:38:21 -- nvmf/common.sh@7 -- # uname -s 00:09:18.926 13:38:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.926 13:38:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.926 13:38:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.926 13:38:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.926 13:38:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.926 13:38:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.926 13:38:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.926 13:38:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.926 13:38:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.926 13:38:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.926 13:38:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.926 13:38:21 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.926 13:38:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.926 13:38:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.926 13:38:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.926 13:38:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.926 13:38:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.926 13:38:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.926 13:38:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.926 13:38:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.927 13:38:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.927 13:38:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.927 13:38:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.927 13:38:21 -- paths/export.sh@5 -- # export PATH 00:09:18.927 13:38:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.927 13:38:21 -- nvmf/common.sh@47 -- # : 0 00:09:18.927 13:38:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.927 13:38:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.927 13:38:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.927 13:38:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.927 13:38:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.927 13:38:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.927 13:38:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.927 13:38:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.927 13:38:21 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.927 13:38:21 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.927 13:38:21 -- target/nvme_cli.sh@14 
-- # devs=() 00:09:18.927 13:38:21 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:18.927 13:38:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:18.927 13:38:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.927 13:38:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:18.927 13:38:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:18.927 13:38:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:18.927 13:38:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.927 13:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.927 13:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.927 13:38:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:18.927 13:38:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:18.927 13:38:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.927 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:09:20.828 13:38:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:20.828 13:38:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.828 13:38:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.828 13:38:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.828 13:38:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.828 13:38:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.828 13:38:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.828 13:38:23 -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.828 13:38:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.828 13:38:23 -- nvmf/common.sh@296 -- # e810=() 00:09:20.828 13:38:23 -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.828 13:38:23 -- nvmf/common.sh@297 -- # x722=() 00:09:20.828 13:38:23 -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.828 13:38:23 -- nvmf/common.sh@298 -- # mlx=() 00:09:20.828 13:38:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.828 13:38:23 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.828 13:38:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.828 13:38:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.828 13:38:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.828 13:38:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:20.828 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:20.828 13:38:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.828 13:38:23 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.828 13:38:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:20.828 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:20.828 13:38:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.828 13:38:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.828 13:38:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.828 13:38:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:20.828 Found net devices under 0000:84:00.0: cvl_0_0 00:09:20.828 13:38:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.828 13:38:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.828 13:38:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.828 13:38:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.828 13:38:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:20.828 Found net devices under 0000:84:00.1: cvl_0_1 00:09:20.828 13:38:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.828 13:38:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@403 
-- # is_hw=yes 00:09:20.828 13:38:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:20.828 13:38:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:20.828 13:38:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.828 13:38:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.828 13:38:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.828 13:38:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.828 13:38:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.828 13:38:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.828 13:38:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.828 13:38:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.828 13:38:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.828 13:38:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.828 13:38:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.828 13:38:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.828 13:38:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.828 13:38:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.828 13:38:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.828 13:38:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.828 13:38:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.829 13:38:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.829 13:38:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.829 13:38:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:09:20.829 00:09:20.829 --- 10.0.0.2 ping statistics --- 00:09:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.829 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:20.829 13:38:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:20.829 00:09:20.829 --- 10.0.0.1 ping statistics --- 00:09:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.829 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:20.829 13:38:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.829 13:38:23 -- nvmf/common.sh@411 -- # return 0 00:09:20.829 13:38:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:20.829 13:38:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.829 13:38:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:20.829 13:38:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:20.829 13:38:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.829 13:38:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:20.829 13:38:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:20.829 13:38:23 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:20.829 13:38:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:20.829 13:38:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:20.829 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:20.829 13:38:23 -- nvmf/common.sh@470 -- # nvmfpid=2549709 00:09:20.829 13:38:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.829 13:38:23 -- nvmf/common.sh@471 -- # waitforlisten 2549709 00:09:20.829 13:38:23 -- common/autotest_common.sh@817 
-- # '[' -z 2549709 ']' 00:09:20.829 13:38:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.829 13:38:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.829 13:38:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.829 13:38:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.829 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:20.829 [2024-04-18 13:38:23.511598] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:09:20.829 [2024-04-18 13:38:23.511674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.829 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.829 [2024-04-18 13:38:23.577478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.089 [2024-04-18 13:38:23.688234] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.089 [2024-04-18 13:38:23.688293] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.089 [2024-04-18 13:38:23.688322] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.089 [2024-04-18 13:38:23.688334] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.089 [2024-04-18 13:38:23.688344] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.089 [2024-04-18 13:38:23.688396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.089 [2024-04-18 13:38:23.688513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.090 [2024-04-18 13:38:23.688544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.090 [2024-04-18 13:38:23.688546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.090 13:38:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.090 13:38:23 -- common/autotest_common.sh@850 -- # return 0 00:09:21.090 13:38:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:21.090 13:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:21.090 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.090 13:38:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.090 13:38:23 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.090 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.090 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.090 [2024-04-18 13:38:23.835759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.090 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.090 13:38:23 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.090 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.090 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.090 Malloc0 00:09:21.090 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.090 13:38:23 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:21.090 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.090 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.090 Malloc1 00:09:21.090 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:09:21.090 13:38:23 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:21.090 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.090 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.396 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.396 13:38:23 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.396 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.396 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.396 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.396 13:38:23 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.396 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.396 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.396 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.396 13:38:23 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.396 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.396 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.396 [2024-04-18 13:38:23.917403] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.396 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.396 13:38:23 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.396 13:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.396 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.396 13:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.396 13:38:23 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:09:21.396 00:09:21.396 Discovery Log Number of Records 2, Generation counter 2 00:09:21.396 =====Discovery Log Entry 0====== 00:09:21.396 trtype: tcp 00:09:21.396 adrfam: ipv4 00:09:21.396 subtype: current discovery subsystem 00:09:21.396 treq: not required 00:09:21.396 portid: 0 00:09:21.396 trsvcid: 4420 00:09:21.396 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:21.396 traddr: 10.0.0.2 00:09:21.396 eflags: explicit discovery connections, duplicate discovery information 00:09:21.396 sectype: none 00:09:21.396 =====Discovery Log Entry 1====== 00:09:21.396 trtype: tcp 00:09:21.396 adrfam: ipv4 00:09:21.396 subtype: nvme subsystem 00:09:21.396 treq: not required 00:09:21.396 portid: 0 00:09:21.396 trsvcid: 4420 00:09:21.396 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:21.396 traddr: 10.0.0.2 00:09:21.396 eflags: none 00:09:21.396 sectype: none 00:09:21.396 13:38:23 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:21.396 13:38:23 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:21.396 13:38:23 -- nvmf/common.sh@511 -- # local dev _ 00:09:21.396 13:38:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:21.396 13:38:23 -- nvmf/common.sh@510 -- # nvme list 00:09:21.396 13:38:23 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:21.396 13:38:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:21.396 13:38:23 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:21.396 13:38:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:21.396 13:38:23 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:21.396 13:38:23 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.981 13:38:24 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:21.981 13:38:24 -- common/autotest_common.sh@1184 -- # local i=0 00:09:21.981 13:38:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.981 13:38:24 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:21.981 13:38:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:21.981 13:38:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:23.877 13:38:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:23.877 13:38:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:23.877 13:38:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.877 13:38:26 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:23.877 13:38:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.877 13:38:26 -- common/autotest_common.sh@1194 -- # return 0 00:09:23.877 13:38:26 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:23.877 13:38:26 -- nvmf/common.sh@511 -- # local dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@510 -- # nvme list 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:09:23.877 /dev/nvme0n1 ]] 00:09:23.877 13:38:26 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:23.877 13:38:26 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:23.877 13:38:26 -- nvmf/common.sh@511 -- # local dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@510 -- # nvme list 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.877 13:38:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:23.877 13:38:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:23.877 13:38:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:23.878 13:38:26 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:23.878 13:38:26 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.135 13:38:26 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.135 13:38:26 -- common/autotest_common.sh@1205 -- # local i=0 00:09:24.135 13:38:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:24.135 13:38:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.135 13:38:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:24.135 13:38:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.135 13:38:26 -- common/autotest_common.sh@1217 -- # return 0 00:09:24.135 13:38:26 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:09:24.135 13:38:26 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.135 13:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.135 13:38:26 -- common/autotest_common.sh@10 -- # set +x 00:09:24.135 13:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.135 13:38:26 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:24.135 13:38:26 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:24.135 13:38:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:24.135 13:38:26 -- nvmf/common.sh@117 -- # sync 00:09:24.135 13:38:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.135 13:38:26 -- nvmf/common.sh@120 -- # set +e 00:09:24.135 13:38:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.135 13:38:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.135 rmmod nvme_tcp 00:09:24.135 rmmod nvme_fabrics 00:09:24.135 rmmod nvme_keyring 00:09:24.135 13:38:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.135 13:38:26 -- nvmf/common.sh@124 -- # set -e 00:09:24.135 13:38:26 -- nvmf/common.sh@125 -- # return 0 00:09:24.135 13:38:26 -- nvmf/common.sh@478 -- # '[' -n 2549709 ']' 00:09:24.135 13:38:26 -- nvmf/common.sh@479 -- # killprocess 2549709 00:09:24.135 13:38:26 -- common/autotest_common.sh@936 -- # '[' -z 2549709 ']' 00:09:24.135 13:38:26 -- common/autotest_common.sh@940 -- # kill -0 2549709 00:09:24.135 13:38:26 -- common/autotest_common.sh@941 -- # uname 00:09:24.135 13:38:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.135 13:38:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2549709 00:09:24.135 13:38:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.135 13:38:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.135 13:38:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2549709' 00:09:24.135 killing process with pid 2549709 00:09:24.135 13:38:26 -- 
common/autotest_common.sh@955 -- # kill 2549709 00:09:24.135 13:38:26 -- common/autotest_common.sh@960 -- # wait 2549709 00:09:24.393 13:38:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:24.393 13:38:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:24.393 13:38:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:24.393 13:38:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.393 13:38:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.393 13:38:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.393 13:38:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.393 13:38:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.928 13:38:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:26.928 00:09:26.928 real 0m7.913s 00:09:26.928 user 0m14.070s 00:09:26.928 sys 0m2.086s 00:09:26.929 13:38:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.929 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.929 ************************************ 00:09:26.929 END TEST nvmf_nvme_cli 00:09:26.929 ************************************ 00:09:26.929 13:38:29 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:26.929 13:38:29 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:26.929 13:38:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:26.929 13:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.929 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.929 ************************************ 00:09:26.929 START TEST nvmf_vfio_user 00:09:26.929 ************************************ 00:09:26.929 13:38:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:26.929 * Looking for test storage... 
00:09:26.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.929 13:38:29 -- nvmf/common.sh@7 -- # uname -s 00:09:26.929 13:38:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.929 13:38:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.929 13:38:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.929 13:38:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.929 13:38:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.929 13:38:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.929 13:38:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.929 13:38:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.929 13:38:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.929 13:38:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.929 13:38:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:26.929 13:38:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:26.929 13:38:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.929 13:38:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.929 13:38:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.929 13:38:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.929 13:38:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.929 13:38:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.929 13:38:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.929 13:38:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.929 13:38:29 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.929 13:38:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.929 13:38:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.929 13:38:29 -- paths/export.sh@5 -- # export PATH 00:09:26.929 13:38:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.929 13:38:29 -- nvmf/common.sh@47 -- # : 0 00:09:26.929 13:38:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.929 13:38:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.929 13:38:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.929 13:38:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.929 13:38:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.929 13:38:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.929 13:38:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.929 13:38:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2550600 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2550600' 00:09:26.929 Process pid: 2550600 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2550600 00:09:26.929 13:38:29 -- common/autotest_common.sh@817 -- # '[' -z 2550600 ']' 00:09:26.929 13:38:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.929 13:38:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:26.929 13:38:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.929 13:38:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:26.929 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.929 [2024-04-18 13:38:29.416672] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:09:26.929 [2024-04-18 13:38:29.416759] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.929 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.929 [2024-04-18 13:38:29.477999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.929 [2024-04-18 13:38:29.589328] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:26.929 [2024-04-18 13:38:29.589392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.929 [2024-04-18 13:38:29.589408] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.929 [2024-04-18 13:38:29.589422] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.929 [2024-04-18 13:38:29.589433] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.929 [2024-04-18 13:38:29.593203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.929 [2024-04-18 13:38:29.593261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.929 [2024-04-18 13:38:29.593345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.929 [2024-04-18 13:38:29.593348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.929 13:38:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:26.929 13:38:29 -- common/autotest_common.sh@850 -- # return 0 00:09:26.929 13:38:29 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:28.301 13:38:30 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:28.559 Malloc1 00:09:28.559 13:38:31 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:28.816 13:38:31 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:29.073 13:38:31 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:29.331 13:38:31 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:29.331 13:38:31 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:29.331 13:38:32 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:29.588 Malloc2 00:09:29.588 13:38:32 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:29.845 13:38:32 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:30.102 13:38:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:30.360 13:38:33 -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:30.360 [2024-04-18 13:38:33.157697] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:09:30.360 [2024-04-18 13:38:33.157733] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551021 ] 00:09:30.618 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.618 [2024-04-18 13:38:33.190423] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:30.618 [2024-04-18 13:38:33.199629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:30.618 [2024-04-18 13:38:33.199656] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8779565000 00:09:30.618 [2024-04-18 13:38:33.200626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.201634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.202625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.203628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.204633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.205634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.206641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.207642] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.618 [2024-04-18 13:38:33.208650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:30.618 [2024-04-18 13:38:33.208673] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f877955a000 00:09:30.618 [2024-04-18 13:38:33.209817] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:30.618 [2024-04-18 13:38:33.224830] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:30.618 [2024-04-18 13:38:33.224871] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:30.618 [2024-04-18 13:38:33.229767] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:30.618 [2024-04-18 13:38:33.229818] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:30.618 [2024-04-18 13:38:33.229905] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:30.618 [2024-04-18 13:38:33.229937] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:30.618 [2024-04-18 13:38:33.229947] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:30.618 [2024-04-18 13:38:33.230764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:30.618 [2024-04-18 13:38:33.230783] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:30.618 [2024-04-18 13:38:33.230796] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:30.618 [2024-04-18 13:38:33.231767] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:30.618 [2024-04-18 13:38:33.231784] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:30.618 [2024-04-18 13:38:33.231797] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:30.618 [2024-04-18 13:38:33.232774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:30.618 [2024-04-18 13:38:33.232792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:30.618 [2024-04-18 13:38:33.233779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:30.618 [2024-04-18 13:38:33.233799] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:30.618 [2024-04-18 13:38:33.233808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:30.618 [2024-04-18 13:38:33.233819] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:30.618 [2024-04-18 13:38:33.233933] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:30.618 [2024-04-18 13:38:33.233942] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:30.619 [2024-04-18 13:38:33.233950] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:30.619 [2024-04-18 13:38:33.234785] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:30.619 [2024-04-18 13:38:33.235785] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:30.619 [2024-04-18 13:38:33.236793] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:30.619 [2024-04-18 13:38:33.237786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:30.619 [2024-04-18 13:38:33.237898] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:30.619 [2024-04-18 13:38:33.238800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:30.619 [2024-04-18 13:38:33.238817] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:30.619 [2024-04-18 13:38:33.238826] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.238850] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:30.619 [2024-04-18 13:38:33.238869] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.238897] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.619 [2024-04-18 13:38:33.238906] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.619 [2024-04-18 13:38:33.238926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.238985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239002] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:30.619 [2024-04-18 13:38:33.239011] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:30.619 [2024-04-18 13:38:33.239018] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:30.619 [2024-04-18 13:38:33.239026] 
nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:30.619 [2024-04-18 13:38:33.239034] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:30.619 [2024-04-18 13:38:33.239042] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:30.619 [2024-04-18 13:38:33.239049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239063] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.619 [2024-04-18 13:38:33.239133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.619 [2024-04-18 13:38:33.239144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.619 [2024-04-18 13:38:33.239172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.619 [2024-04-18 13:38:33.239189] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239207] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239246] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:30.619 [2024-04-18 13:38:33.239255] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239270] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239282] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239368] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239383] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239397] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:30.619 [2024-04-18 13:38:33.239406] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:30.619 [2024-04-18 13:38:33.239415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239454] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:30.619 [2024-04-18 13:38:33.239498] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239514] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239529] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.619 [2024-04-18 13:38:33.239538] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.619 [2024-04-18 13:38:33.239562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239603] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239618] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239630] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.619 [2024-04-18 13:38:33.239638] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.619 [2024-04-18 13:38:33.239647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239675] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239686] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239700] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239719] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239727] 
nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:30.619 [2024-04-18 13:38:33.239735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:30.619 [2024-04-18 13:38:33.239743] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:30.619 [2024-04-18 13:38:33.239768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.239893] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:30.619 
[2024-04-18 13:38:33.239902] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:30.619 [2024-04-18 13:38:33.239908] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:30.619 [2024-04-18 13:38:33.239914] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:30.619 [2024-04-18 13:38:33.239923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:30.619 [2024-04-18 13:38:33.239934] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:30.619 [2024-04-18 13:38:33.239942] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:30.619 [2024-04-18 13:38:33.239951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239962] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:30.619 [2024-04-18 13:38:33.239969] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.619 [2024-04-18 13:38:33.239978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.239989] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:30.619 [2024-04-18 13:38:33.239997] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:30.619 [2024-04-18 13:38:33.240006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:09:30.619 [2024-04-18 13:38:33.240017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.240037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.240053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:30.619 [2024-04-18 13:38:33.240064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:30.619 ===================================================== 00:09:30.619 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:30.619 ===================================================== 00:09:30.619 Controller Capabilities/Features 00:09:30.619 ================================ 00:09:30.619 Vendor ID: 4e58 00:09:30.619 Subsystem Vendor ID: 4e58 00:09:30.619 Serial Number: SPDK1 00:09:30.619 Model Number: SPDK bdev Controller 00:09:30.619 Firmware Version: 24.05 00:09:30.619 Recommended Arb Burst: 6 00:09:30.619 IEEE OUI Identifier: 8d 6b 50 00:09:30.619 Multi-path I/O 00:09:30.619 May have multiple subsystem ports: Yes 00:09:30.619 May have multiple controllers: Yes 00:09:30.619 Associated with SR-IOV VF: No 00:09:30.619 Max Data Transfer Size: 131072 00:09:30.619 Max Number of Namespaces: 32 00:09:30.619 Max Number of I/O Queues: 127 00:09:30.619 NVMe Specification Version (VS): 1.3 00:09:30.619 NVMe Specification Version (Identify): 1.3 00:09:30.619 Maximum Queue Entries: 256 00:09:30.619 Contiguous Queues Required: Yes 00:09:30.619 Arbitration Mechanisms Supported 00:09:30.619 Weighted Round Robin: Not Supported 00:09:30.619 Vendor Specific: Not Supported 00:09:30.619 Reset Timeout: 15000 ms 00:09:30.619 Doorbell Stride: 4 bytes 00:09:30.619 NVM Subsystem 
Reset: Not Supported 00:09:30.619 Command Sets Supported 00:09:30.619 NVM Command Set: Supported 00:09:30.619 Boot Partition: Not Supported 00:09:30.619 Memory Page Size Minimum: 4096 bytes 00:09:30.619 Memory Page Size Maximum: 4096 bytes 00:09:30.619 Persistent Memory Region: Not Supported 00:09:30.619 Optional Asynchronous Events Supported 00:09:30.619 Namespace Attribute Notices: Supported 00:09:30.619 Firmware Activation Notices: Not Supported 00:09:30.619 ANA Change Notices: Not Supported 00:09:30.619 PLE Aggregate Log Change Notices: Not Supported 00:09:30.619 LBA Status Info Alert Notices: Not Supported 00:09:30.619 EGE Aggregate Log Change Notices: Not Supported 00:09:30.619 Normal NVM Subsystem Shutdown event: Not Supported 00:09:30.619 Zone Descriptor Change Notices: Not Supported 00:09:30.619 Discovery Log Change Notices: Not Supported 00:09:30.619 Controller Attributes 00:09:30.619 128-bit Host Identifier: Supported 00:09:30.619 Non-Operational Permissive Mode: Not Supported 00:09:30.619 NVM Sets: Not Supported 00:09:30.619 Read Recovery Levels: Not Supported 00:09:30.619 Endurance Groups: Not Supported 00:09:30.619 Predictable Latency Mode: Not Supported 00:09:30.619 Traffic Based Keep ALive: Not Supported 00:09:30.619 Namespace Granularity: Not Supported 00:09:30.619 SQ Associations: Not Supported 00:09:30.619 UUID List: Not Supported 00:09:30.619 Multi-Domain Subsystem: Not Supported 00:09:30.619 Fixed Capacity Management: Not Supported 00:09:30.619 Variable Capacity Management: Not Supported 00:09:30.619 Delete Endurance Group: Not Supported 00:09:30.619 Delete NVM Set: Not Supported 00:09:30.619 Extended LBA Formats Supported: Not Supported 00:09:30.619 Flexible Data Placement Supported: Not Supported 00:09:30.619 00:09:30.619 Controller Memory Buffer Support 00:09:30.619 ================================ 00:09:30.619 Supported: No 00:09:30.619 00:09:30.619 Persistent Memory Region Support 00:09:30.619 ================================ 00:09:30.619 
Supported: No 00:09:30.619 00:09:30.619 Admin Command Set Attributes 00:09:30.619 ============================ 00:09:30.619 Security Send/Receive: Not Supported 00:09:30.619 Format NVM: Not Supported 00:09:30.619 Firmware Activate/Download: Not Supported 00:09:30.619 Namespace Management: Not Supported 00:09:30.619 Device Self-Test: Not Supported 00:09:30.619 Directives: Not Supported 00:09:30.619 NVMe-MI: Not Supported 00:09:30.619 Virtualization Management: Not Supported 00:09:30.619 Doorbell Buffer Config: Not Supported 00:09:30.619 Get LBA Status Capability: Not Supported 00:09:30.619 Command & Feature Lockdown Capability: Not Supported 00:09:30.619 Abort Command Limit: 4 00:09:30.619 Async Event Request Limit: 4 00:09:30.619 Number of Firmware Slots: N/A 00:09:30.619 Firmware Slot 1 Read-Only: N/A 00:09:30.619 Firmware Activation Without Reset: N/A 00:09:30.619 Multiple Update Detection Support: N/A 00:09:30.619 Firmware Update Granularity: No Information Provided 00:09:30.619 Per-Namespace SMART Log: No 00:09:30.619 Asymmetric Namespace Access Log Page: Not Supported 00:09:30.619 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:30.619 Command Effects Log Page: Supported 00:09:30.619 Get Log Page Extended Data: Supported 00:09:30.619 Telemetry Log Pages: Not Supported 00:09:30.619 Persistent Event Log Pages: Not Supported 00:09:30.619 Supported Log Pages Log Page: May Support 00:09:30.619 Commands Supported & Effects Log Page: Not Supported 00:09:30.619 Feature Identifiers & Effects Log Page:May Support 00:09:30.619 NVMe-MI Commands & Effects Log Page: May Support 00:09:30.619 Data Area 4 for Telemetry Log: Not Supported 00:09:30.619 Error Log Page Entries Supported: 128 00:09:30.619 Keep Alive: Supported 00:09:30.619 Keep Alive Granularity: 10000 ms 00:09:30.619 00:09:30.619 NVM Command Set Attributes 00:09:30.619 ========================== 00:09:30.619 Submission Queue Entry Size 00:09:30.619 Max: 64 00:09:30.619 Min: 64 00:09:30.619 Completion Queue Entry 
Size 00:09:30.619 Max: 16 00:09:30.619 Min: 16 00:09:30.619 Number of Namespaces: 32 00:09:30.619 Compare Command: Supported 00:09:30.619 Write Uncorrectable Command: Not Supported 00:09:30.619 Dataset Management Command: Supported 00:09:30.619 Write Zeroes Command: Supported 00:09:30.619 Set Features Save Field: Not Supported 00:09:30.619 Reservations: Not Supported 00:09:30.619 Timestamp: Not Supported 00:09:30.619 Copy: Supported 00:09:30.619 Volatile Write Cache: Present 00:09:30.619 Atomic Write Unit (Normal): 1 00:09:30.619 Atomic Write Unit (PFail): 1 00:09:30.619 Atomic Compare & Write Unit: 1 00:09:30.619 Fused Compare & Write: Supported 00:09:30.619 Scatter-Gather List 00:09:30.619 SGL Command Set: Supported (Dword aligned) 00:09:30.619 SGL Keyed: Not Supported 00:09:30.619 SGL Bit Bucket Descriptor: Not Supported 00:09:30.619 SGL Metadata Pointer: Not Supported 00:09:30.619 Oversized SGL: Not Supported 00:09:30.620 SGL Metadata Address: Not Supported 00:09:30.620 SGL Offset: Not Supported 00:09:30.620 Transport SGL Data Block: Not Supported 00:09:30.620 Replay Protected Memory Block: Not Supported 00:09:30.620 00:09:30.620 Firmware Slot Information 00:09:30.620 ========================= 00:09:30.620 Active slot: 1 00:09:30.620 Slot 1 Firmware Revision: 24.05 00:09:30.620 00:09:30.620 00:09:30.620 Commands Supported and Effects 00:09:30.620 ============================== 00:09:30.620 Admin Commands 00:09:30.620 -------------- 00:09:30.620 Get Log Page (02h): Supported 00:09:30.620 Identify (06h): Supported 00:09:30.620 Abort (08h): Supported 00:09:30.620 Set Features (09h): Supported 00:09:30.620 Get Features (0Ah): Supported 00:09:30.620 Asynchronous Event Request (0Ch): Supported 00:09:30.620 Keep Alive (18h): Supported 00:09:30.620 I/O Commands 00:09:30.620 ------------ 00:09:30.620 Flush (00h): Supported LBA-Change 00:09:30.620 Write (01h): Supported LBA-Change 00:09:30.620 Read (02h): Supported 00:09:30.620 Compare (05h): Supported 00:09:30.620 Write 
Zeroes (08h): Supported LBA-Change 00:09:30.620 Dataset Management (09h): Supported LBA-Change 00:09:30.620 Copy (19h): Supported LBA-Change 00:09:30.620 Unknown (79h): Supported LBA-Change 00:09:30.620 Unknown (7Ah): Supported 00:09:30.620 00:09:30.620 Error Log 00:09:30.620 ========= 00:09:30.620 00:09:30.620 Arbitration 00:09:30.620 =========== 00:09:30.620 Arbitration Burst: 1 00:09:30.620 00:09:30.620 Power Management 00:09:30.620 ================ 00:09:30.620 Number of Power States: 1 00:09:30.620 Current Power State: Power State #0 00:09:30.620 Power State #0: 00:09:30.620 Max Power: 0.00 W 00:09:30.620 Non-Operational State: Operational 00:09:30.620 Entry Latency: Not Reported 00:09:30.620 Exit Latency: Not Reported 00:09:30.620 Relative Read Throughput: 0 00:09:30.620 Relative Read Latency: 0 00:09:30.620 Relative Write Throughput: 0 00:09:30.620 Relative Write Latency: 0 00:09:30.620 Idle Power: Not Reported 00:09:30.620 Active Power: Not Reported 00:09:30.620 Non-Operational Permissive Mode: Not Supported 00:09:30.620 00:09:30.620 Health Information 00:09:30.620 ================== 00:09:30.620 Critical Warnings: 00:09:30.620 Available Spare Space: OK 00:09:30.620 Temperature: OK 00:09:30.620 Device Reliability: OK 00:09:30.620 Read Only: No 00:09:30.620 Volatile Memory Backup: OK 00:09:30.620 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:30.620 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:30.620 Available Spare: 0% 00:09:30.620 Available Spare Threshold: 0% 00:09:30.620 Life Percentage Used: 0% 00:09:30.620 Data Units Read: 0 00:09:30.620 Data Units Written: 0 00:09:30.620 Host Read Commands: 0 00:09:30.620 Host Write Commands: 0 00:09:30.620 Controller Busy Time: 0 minutes 00:09:30.620 Power Cycles: 0 00:09:30.620 Power On Hours: 0 hours 00:09:30.620 Unsafe Shutdowns: 0 00:09:30.620 Unrecoverable Media Errors: 0 00:09:30.620 Lifetime Error Log Entries: 0 00:09:30.620 Warning Temperature Time: 0 minutes 00:09:30.620 Critical Temperature Time: 0 minutes 00:09:30.620 00:09:30.620 Number of Queues 00:09:30.620 ================ 00:09:30.620 Number of I/O Submission Queues: 127 00:09:30.620 Number of I/O Completion Queues: 127 00:09:30.620 00:09:30.620 Active Namespaces 00:09:30.620 ================= 00:09:30.620 Namespace ID:1 00:09:30.620 Error Recovery Timeout: Unlimited 00:09:30.620 Command Set Identifier: NVM (00h) 00:09:30.620 Deallocate: Supported 00:09:30.620 Deallocated/Unwritten Error: Not Supported 00:09:30.620 Deallocated Read Value: Unknown 00:09:30.620 Deallocate in Write Zeroes: Not Supported 00:09:30.620 Deallocated Guard Field: 0xFFFF 00:09:30.620 Flush: Supported 00:09:30.620 Reservation: Supported 00:09:30.620 Namespace Sharing Capabilities: Multiple Controllers 00:09:30.620 Size (in LBAs): 131072 (0GiB) 00:09:30.620 Capacity (in LBAs): 131072 (0GiB) 00:09:30.620 Utilization (in LBAs): 131072 (0GiB) 00:09:30.620 NGUID: CDA5039D2F114F738B33D2D7ED1681E7 00:09:30.620 UUID: cda5039d-2f11-4f73-8b33-d2d7ed1681e7 00:09:30.620 Thin Provisioning: Not Supported 00:09:30.620 Per-NS Atomic Units: Yes 00:09:30.620 Atomic Boundary Size (Normal): 0 00:09:30.620 Atomic Boundary Size (PFail): 0 00:09:30.620 Atomic Boundary Offset: 0 00:09:30.620 Maximum Single Source Range Length: 65535 00:09:30.620 Maximum Copy Length: 65535 00:09:30.620 Maximum Source Range Count: 1 00:09:30.620 NGUID/EUI64 Never Reused: No 00:09:30.620 Namespace Write Protected: No 00:09:30.620 Number of LBA Formats: 1 00:09:30.620 Current LBA Format: LBA Format #00 00:09:30.620 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.620 00:09:30.620
[2024-04-18 13:38:33.240217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:30.620 [2024-04-18 13:38:33.240235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:30.620 [2024-04-18 13:38:33.240275] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:30.620 [2024-04-18 13:38:33.240293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.620 [2024-04-18 13:38:33.240304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.620 [2024-04-18 13:38:33.240315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.620 [2024-04-18 13:38:33.240324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.620 [2024-04-18 13:38:33.244189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:30.620 [2024-04-18 13:38:33.244212] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:30.620 [2024-04-18 13:38:33.244830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:30.620 [2024-04-18 13:38:33.244911] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:30.620 [2024-04-18 13:38:33.244924] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:30.620 [2024-04-18 13:38:33.245846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:30.620 [2024-04-18 13:38:33.245869] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:30.620 [2024-04-18 13:38:33.245923] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:30.620 [2024-04-18 13:38:33.247882] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:30.620
13:38:33 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:30.620 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.877 [2024-04-18 13:38:33.478992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:36.136 [2024-04-18 13:38:38.502022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:36.136 Initializing NVMe Controllers 00:09:36.136 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:36.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:36.136 Initialization complete. Launching workers.
00:09:36.136 ======================================================== 00:09:36.136 Latency(us) 00:09:36.136 Device Information : IOPS MiB/s Average min max 00:09:36.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33219.60 129.76 3856.00 1207.27 8113.63 00:09:36.136 ======================================================== 00:09:36.136 Total : 33219.60 129.76 3856.00 1207.27 8113.63 00:09:36.136 00:09:36.136 13:38:38 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:36.136 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.136 [2024-04-18 13:38:38.734115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:41.398 [2024-04-18 13:38:43.777367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:41.398 Initializing NVMe Controllers 00:09:41.398 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:41.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:41.398 Initialization complete. Launching workers. 
00:09:41.398 ======================================================== 00:09:41.398 Latency(us) 00:09:41.398 Device Information : IOPS MiB/s Average min max 00:09:41.398 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.09 62.65 7986.25 6981.44 11976.73 00:09:41.398 ======================================================== 00:09:41.398 Total : 16038.09 62.65 7986.25 6981.44 11976.73 00:09:41.398 00:09:41.398 13:38:43 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:41.398 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.398 [2024-04-18 13:38:43.991395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.711 [2024-04-18 13:38:49.065580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.711 Initializing NVMe Controllers 00:09:46.711 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.711 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:46.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:46.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:46.711 Initialization complete. Launching workers. 
00:09:46.711 Starting thread on core 2 00:09:46.711 Starting thread on core 3 00:09:46.711 Starting thread on core 1 00:09:46.711 13:38:49 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:46.711 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.711 [2024-04-18 13:38:49.364002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.993 [2024-04-18 13:38:52.431074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:49.993 Initializing NVMe Controllers 00:09:49.993 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:49.993 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:49.993 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:49.993 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:49.993 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:49.993 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:49.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:49.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:49.993 Initialization complete. Launching workers. 
00:09:49.993 Starting thread on core 1 with urgent priority queue 00:09:49.993 Starting thread on core 2 with urgent priority queue 00:09:49.993 Starting thread on core 3 with urgent priority queue 00:09:49.993 Starting thread on core 0 with urgent priority queue 00:09:49.993 SPDK bdev Controller (SPDK1 ) core 0: 4783.33 IO/s 20.91 secs/100000 ios 00:09:49.993 SPDK bdev Controller (SPDK1 ) core 1: 4882.33 IO/s 20.48 secs/100000 ios 00:09:49.993 SPDK bdev Controller (SPDK1 ) core 2: 4988.33 IO/s 20.05 secs/100000 ios 00:09:49.993 SPDK bdev Controller (SPDK1 ) core 3: 5108.33 IO/s 19.58 secs/100000 ios 00:09:49.993 ======================================================== 00:09:49.993 00:09:49.993 13:38:52 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:49.993 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.993 [2024-04-18 13:38:52.732707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.993 [2024-04-18 13:38:52.766346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.250 Initializing NVMe Controllers 00:09:50.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.250 Namespace ID: 1 size: 0GB 00:09:50.250 Initialization complete. 00:09:50.250 INFO: using host memory buffer for IO 00:09:50.250 Hello world! 
00:09:50.250 13:38:52 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:50.250 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.507 [2024-04-18 13:38:53.070658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.439 Initializing NVMe Controllers 00:09:51.439 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:51.439 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:51.439 Initialization complete. Launching workers. 00:09:51.439 submit (in ns) avg, min, max = 9215.0, 3510.0, 4017104.4 00:09:51.439 complete (in ns) avg, min, max = 24925.1, 2044.4, 4017218.9 00:09:51.439 00:09:51.439 Submit histogram 00:09:51.439 ================ 00:09:51.439 Range in us Cumulative Count 00:09:51.439 3.508 - 3.532: 0.0515% ( 7) 00:09:51.439 3.532 - 3.556: 1.4433% ( 189) 00:09:51.439 3.556 - 3.579: 6.6495% ( 707) 00:09:51.439 3.579 - 3.603: 16.6937% ( 1364) 00:09:51.439 3.603 - 3.627: 27.9676% ( 1531) 00:09:51.439 3.627 - 3.650: 40.3019% ( 1675) 00:09:51.439 3.650 - 3.674: 47.3564% ( 958) 00:09:51.439 3.674 - 3.698: 52.3122% ( 673) 00:09:51.439 3.698 - 3.721: 56.8409% ( 615) 00:09:51.439 3.721 - 3.745: 61.1561% ( 586) 00:09:51.439 3.745 - 3.769: 64.4993% ( 454) 00:09:51.439 3.769 - 3.793: 66.9514% ( 333) 00:09:51.439 3.793 - 3.816: 70.0589% ( 422) 00:09:51.439 3.816 - 3.840: 74.1384% ( 554) 00:09:51.439 3.840 - 3.864: 79.9116% ( 784) 00:09:51.439 3.864 - 3.887: 83.9175% ( 544) 00:09:51.439 3.887 - 3.911: 86.1193% ( 299) 00:09:51.439 3.911 - 3.935: 87.9823% ( 253) 00:09:51.439 3.935 - 3.959: 89.7054% ( 234) 00:09:51.439 3.959 - 3.982: 91.1340% ( 194) 00:09:51.439 3.982 - 4.006: 92.2091% ( 146) 00:09:51.439 4.006 - 4.030: 92.9308% ( 98) 00:09:51.439 4.030 - 4.053: 93.8881% ( 130) 00:09:51.439 4.053 - 4.077: 
94.8822% ( 135) 00:09:51.439 4.077 - 4.101: 95.5817% ( 95) 00:09:51.439 4.101 - 4.124: 95.9794% ( 54) 00:09:51.439 4.124 - 4.148: 96.3255% ( 47) 00:09:51.439 4.148 - 4.172: 96.5538% ( 31) 00:09:51.439 4.172 - 4.196: 96.6568% ( 14) 00:09:51.439 4.196 - 4.219: 96.8189% ( 22) 00:09:51.439 4.219 - 4.243: 96.8851% ( 9) 00:09:51.439 4.243 - 4.267: 97.0250% ( 19) 00:09:51.439 4.267 - 4.290: 97.1944% ( 23) 00:09:51.439 4.290 - 4.314: 97.2828% ( 12) 00:09:51.439 4.314 - 4.338: 97.3490% ( 9) 00:09:51.439 4.338 - 4.361: 97.4521% ( 14) 00:09:51.439 4.361 - 4.385: 97.5037% ( 7) 00:09:51.439 4.385 - 4.409: 97.5479% ( 6) 00:09:51.439 4.409 - 4.433: 97.5700% ( 3) 00:09:51.439 4.456 - 4.480: 97.5847% ( 2) 00:09:51.439 4.480 - 4.504: 97.5920% ( 1) 00:09:51.439 4.504 - 4.527: 97.6068% ( 2) 00:09:51.439 4.551 - 4.575: 97.6215% ( 2) 00:09:51.439 4.622 - 4.646: 97.6289% ( 1) 00:09:51.439 4.670 - 4.693: 97.6362% ( 1) 00:09:51.439 4.693 - 4.717: 97.6510% ( 2) 00:09:51.439 4.717 - 4.741: 97.6804% ( 4) 00:09:51.439 4.741 - 4.764: 97.7467% ( 9) 00:09:51.439 4.764 - 4.788: 97.7761% ( 4) 00:09:51.439 4.788 - 4.812: 97.8277% ( 7) 00:09:51.439 4.812 - 4.836: 97.8645% ( 5) 00:09:51.439 4.836 - 4.859: 97.8866% ( 3) 00:09:51.439 4.859 - 4.883: 97.9087% ( 3) 00:09:51.439 4.883 - 4.907: 97.9308% ( 3) 00:09:51.440 4.907 - 4.930: 97.9823% ( 7) 00:09:51.440 4.930 - 4.954: 98.0118% ( 4) 00:09:51.440 4.954 - 4.978: 98.0339% ( 3) 00:09:51.440 4.978 - 5.001: 98.1001% ( 9) 00:09:51.440 5.001 - 5.025: 98.1296% ( 4) 00:09:51.440 5.025 - 5.049: 98.1591% ( 4) 00:09:51.440 5.049 - 5.073: 98.1885% ( 4) 00:09:51.440 5.073 - 5.096: 98.1959% ( 1) 00:09:51.440 5.096 - 5.120: 98.2327% ( 5) 00:09:51.440 5.120 - 5.144: 98.2401% ( 1) 00:09:51.440 5.144 - 5.167: 98.2622% ( 3) 00:09:51.440 5.167 - 5.191: 98.2842% ( 3) 00:09:51.440 5.191 - 5.215: 98.2990% ( 2) 00:09:51.440 5.215 - 5.239: 98.3137% ( 2) 00:09:51.440 5.262 - 5.286: 98.3284% ( 2) 00:09:51.440 5.286 - 5.310: 98.3358% ( 1) 00:09:51.440 5.310 - 5.333: 98.3505% ( 2) 
00:09:51.440 5.428 - 5.452: 98.3579% ( 1) 00:09:51.440 5.452 - 5.476: 98.3652% ( 1) 00:09:51.440 5.570 - 5.594: 98.3726% ( 1) 00:09:51.440 5.997 - 6.021: 98.3800% ( 1) 00:09:51.440 6.258 - 6.305: 98.3873% ( 1) 00:09:51.440 6.495 - 6.542: 98.3947% ( 1) 00:09:51.440 6.590 - 6.637: 98.4094% ( 2) 00:09:51.440 6.732 - 6.779: 98.4168% ( 1) 00:09:51.440 6.874 - 6.921: 98.4315% ( 2) 00:09:51.440 6.969 - 7.016: 98.4389% ( 1) 00:09:51.440 7.016 - 7.064: 98.4462% ( 1) 00:09:51.440 7.064 - 7.111: 98.4536% ( 1) 00:09:51.440 7.159 - 7.206: 98.4610% ( 1) 00:09:51.440 7.206 - 7.253: 98.4757% ( 2) 00:09:51.440 7.253 - 7.301: 98.4831% ( 1) 00:09:51.440 7.301 - 7.348: 98.4904% ( 1) 00:09:51.440 7.396 - 7.443: 98.4978% ( 1) 00:09:51.440 7.633 - 7.680: 98.5052% ( 1) 00:09:51.440 7.680 - 7.727: 98.5199% ( 2) 00:09:51.440 7.727 - 7.775: 98.5272% ( 1) 00:09:51.440 7.775 - 7.822: 98.5346% ( 1) 00:09:51.440 7.822 - 7.870: 98.5420% ( 1) 00:09:51.440 7.870 - 7.917: 98.5641% ( 3) 00:09:51.440 7.917 - 7.964: 98.5714% ( 1) 00:09:51.440 7.964 - 8.012: 98.5862% ( 2) 00:09:51.440 8.012 - 8.059: 98.5935% ( 1) 00:09:51.440 8.059 - 8.107: 98.6082% ( 2) 00:09:51.440 8.154 - 8.201: 98.6156% ( 1) 00:09:51.440 8.201 - 8.249: 98.6230% ( 1) 00:09:51.440 8.296 - 8.344: 98.6303% ( 1) 00:09:51.440 8.391 - 8.439: 98.6377% ( 1) 00:09:51.440 8.439 - 8.486: 98.6598% ( 3) 00:09:51.440 8.628 - 8.676: 98.6672% ( 1) 00:09:51.440 8.676 - 8.723: 98.6745% ( 1) 00:09:51.440 8.770 - 8.818: 98.6819% ( 1) 00:09:51.440 8.865 - 8.913: 98.6892% ( 1) 00:09:51.440 8.960 - 9.007: 98.6966% ( 1) 00:09:51.440 9.007 - 9.055: 98.7113% ( 2) 00:09:51.440 9.055 - 9.102: 98.7261% ( 2) 00:09:51.440 9.102 - 9.150: 98.7334% ( 1) 00:09:51.440 9.766 - 9.813: 98.7408% ( 1) 00:09:51.440 9.956 - 10.003: 98.7482% ( 1) 00:09:51.440 10.003 - 10.050: 98.7555% ( 1) 00:09:51.440 10.193 - 10.240: 98.7629% ( 1) 00:09:51.440 10.619 - 10.667: 98.7703% ( 1) 00:09:51.440 10.809 - 10.856: 98.7850% ( 2) 00:09:51.440 10.856 - 10.904: 98.7923% ( 1) 00:09:51.440 
10.951 - 10.999: 98.7997% ( 1) 00:09:51.440 10.999 - 11.046: 98.8144% ( 2) 00:09:51.440 11.283 - 11.330: 98.8218% ( 1) 00:09:51.440 11.378 - 11.425: 98.8292% ( 1) 00:09:51.440 11.425 - 11.473: 98.8365% ( 1) 00:09:51.440 11.473 - 11.520: 98.8439% ( 1) 00:09:51.440 11.520 - 11.567: 98.8513% ( 1) 00:09:51.440 11.615 - 11.662: 98.8586% ( 1) 00:09:51.440 11.804 - 11.852: 98.8660% ( 1) 00:09:51.440 11.899 - 11.947: 98.8733% ( 1) 00:09:51.440 12.231 - 12.326: 98.8807% ( 1) 00:09:51.440 12.421 - 12.516: 98.8881% ( 1) 00:09:51.440 12.610 - 12.705: 98.8954% ( 1) 00:09:51.440 13.179 - 13.274: 98.9028% ( 1) 00:09:51.440 13.748 - 13.843: 98.9102% ( 1) 00:09:51.440 13.938 - 14.033: 98.9249% ( 2) 00:09:51.440 14.601 - 14.696: 98.9323% ( 1) 00:09:51.440 14.981 - 15.076: 98.9396% ( 1) 00:09:51.440 17.161 - 17.256: 98.9470% ( 1) 00:09:51.440 17.351 - 17.446: 98.9838% ( 5) 00:09:51.440 17.446 - 17.541: 99.0206% ( 5) 00:09:51.440 17.541 - 17.636: 99.0574% ( 5) 00:09:51.440 17.636 - 17.730: 99.1311% ( 10) 00:09:51.440 17.730 - 17.825: 99.1753% ( 6) 00:09:51.440 17.825 - 17.920: 99.2194% ( 6) 00:09:51.440 17.920 - 18.015: 99.2931% ( 10) 00:09:51.440 18.015 - 18.110: 99.3667% ( 10) 00:09:51.440 18.110 - 18.204: 99.4404% ( 10) 00:09:51.440 18.204 - 18.299: 99.5214% ( 11) 00:09:51.440 18.299 - 18.394: 99.5655% ( 6) 00:09:51.440 18.394 - 18.489: 99.6024% ( 5) 00:09:51.440 18.489 - 18.584: 99.6392% ( 5) 00:09:51.440 18.584 - 18.679: 99.6686% ( 4) 00:09:51.440 18.679 - 18.773: 99.7054% ( 5) 00:09:51.440 18.773 - 18.868: 99.7128% ( 1) 00:09:51.440 18.868 - 18.963: 99.7349% ( 3) 00:09:51.440 18.963 - 19.058: 99.7496% ( 2) 00:09:51.440 19.058 - 19.153: 99.7791% ( 4) 00:09:51.440 19.342 - 19.437: 99.7938% ( 2) 00:09:51.440 19.437 - 19.532: 99.8012% ( 1) 00:09:51.440 19.532 - 19.627: 99.8159% ( 2) 00:09:51.440 19.627 - 19.721: 99.8233% ( 1) 00:09:51.440 19.721 - 19.816: 99.8306% ( 1) 00:09:51.440 19.816 - 19.911: 99.8380% ( 1) 00:09:51.440 20.385 - 20.480: 99.8454% ( 1) 00:09:51.440 22.187 - 
22.281: 99.8527% ( 1) 00:09:51.440 28.634 - 28.824: 99.8601% ( 1) 00:09:51.440 42.477 - 42.667: 99.8675% ( 1) 00:09:51.440 3980.705 - 4004.978: 99.9558% ( 12) 00:09:51.440 4004.978 - 4029.250: 100.0000% ( 6) 00:09:51.440 00:09:51.440 Complete histogram 00:09:51.440 ================== 00:09:51.440 Range in us Cumulative Count 00:09:51.440 2.039 - 2.050: 0.6406% ( 87) 00:09:51.440 2.050 - 2.062: 9.4183% ( 1192) 00:09:51.440 2.062 - 2.074: 14.0869% ( 634) 00:09:51.440 2.074 - 2.086: 24.1679% ( 1369) 00:09:51.440 2.086 - 2.098: 52.1281% ( 3797) 00:09:51.440 2.098 - 2.110: 59.8306% ( 1046) 00:09:51.440 2.110 - 2.121: 63.2916% ( 470) 00:09:51.440 2.121 - 2.133: 67.0398% ( 509) 00:09:51.440 2.133 - 2.145: 68.0633% ( 139) 00:09:51.440 2.145 - 2.157: 72.6804% ( 627) 00:09:51.440 2.157 - 2.169: 80.7953% ( 1102) 00:09:51.440 2.169 - 2.181: 82.9161% ( 288) 00:09:51.440 2.181 - 2.193: 84.1237% ( 164) 00:09:51.440 2.193 - 2.204: 85.7953% ( 227) 00:09:51.440 2.204 - 2.216: 86.6200% ( 112) 00:09:51.440 2.216 - 2.228: 88.6892% ( 281) 00:09:51.440 2.228 - 2.240: 92.3122% ( 492) 00:09:51.440 2.240 - 2.252: 93.7629% ( 197) 00:09:51.440 2.252 - 2.264: 94.1900% ( 58) 00:09:51.440 2.264 - 2.276: 94.4993% ( 42) 00:09:51.440 2.276 - 2.287: 94.7644% ( 36) 00:09:51.440 2.287 - 2.299: 94.9926% ( 31) 00:09:51.440 2.299 - 2.311: 95.2577% ( 36) 00:09:51.440 2.311 - 2.323: 95.5817% ( 44) 00:09:51.440 2.323 - 2.335: 95.6996% ( 16) 00:09:51.440 2.335 - 2.347: 95.7806% ( 11) 00:09:51.440 2.347 - 2.359: 95.9647% ( 25) 00:09:51.440 2.359 - 2.370: 96.1193% ( 21) 00:09:51.440 2.370 - 2.382: 96.4065% ( 39) 00:09:51.440 2.382 - 2.394: 96.8189% ( 56) 00:09:51.440 2.394 - 2.406: 97.1060% ( 39) 00:09:51.440 2.406 - 2.418: 97.4080% ( 41) 00:09:51.440 2.418 - 2.430: 97.6804% ( 37) 00:09:51.440 2.430 - 2.441: 97.8424% ( 22) 00:09:51.440 2.441 - 2.453: 97.9529% ( 15) 00:09:51.440 2.453 - 2.465: 98.0560% ( 14) 00:09:51.440 2.465 - 2.477: 98.1370% ( 11) 00:09:51.440 2.477 - 2.489: 98.1664% ( 4) 00:09:51.440 2.489 
- 2.501: 98.2032% ( 5) 00:09:51.440 2.501 - 2.513: 98.2622% ( 8) 00:09:51.440 2.513 - 2.524: 98.3063% ( 6) 00:09:51.440 2.524 - 2.536: 98.3284% ( 3) 00:09:51.440 2.536 - 2.548: 98.3358% ( 1) 00:09:51.440 2.548 - 2.560: 98.3432% ( 1) 00:09:51.440 2.560 - 2.572: 98.3505% ( 1) 00:09:51.440 2.584 - 2.596: 98.3652% ( 2) 00:09:51.440 2.596 - 2.607: 98.3800% ( 2) 00:09:51.440 2.643 - 2.655: 98.3873% ( 1) 00:09:51.440 2.655 - 2.667: 98.4021% ( 2) 00:09:51.440 2.702 - 2.714: 98.4094% ( 1) 00:09:51.440 2.750 - 2.761: 98.4168% ( 1) 00:09:51.440 2.761 - 2.773: 98.4242% ( 1) 00:09:51.440 3.129 - 3.153: 98.4315% ( 1) 00:09:51.440 3.224 - 3.247: 98.4389% ( 1) 00:09:51.440 3.247 - 3.271: 98.4462% ( 1) 00:09:51.440 3.342 - 3.366: 98.4536% ( 1) 00:09:51.440 3.413 - 3.437: 98.4610% ( 1) 00:09:51.440 3.437 - 3.461: 98.4757% ( 2) 00:09:51.440 3.461 - 3.484: 98.5125% ( 5) 00:09:51.440 3.484 - 3.508: 98.5199% ( 1) 00:09:51.440 3.556 - 3.579: 98.5272% ( 1) 00:09:51.440 3.579 - 3.603: 98.5346% ( 1) 00:09:51.440 3.603 - 3.627: 98.5420% ( 1) 00:09:51.440 3.627 - 3.650: 98.5567% ( 2) 00:09:51.440 3.674 - 3.698: 98.5641% ( 1) 00:09:51.440 3.793 - 3.816: 98.5714% ( 1) 00:09:51.440 3.840 - 3.864: 98.5788% ( 1) 00:09:51.440 3.864 - 3.887: 98.5862% ( 1) 00:09:51.440 3.911 - 3.935: 98.6009% ( 2) 00:09:51.440 4.053 - 4.077: 98.6082% ( 1) 00:09:51.440 6.068 - 6.116: 98.6156% ( 1) 00:09:51.440 6.116 - 6.163: 98.6230% ( 1) 00:09:51.440 6.210 - 6.258: 98.6377% ( 2) 00:09:51.441 6.258 - 6.305: 98.6451% ( 1) 00:09:51.441 6.353 - 6.400: 98.6524% ( 1) 00:09:51.441 6.447 - 6.495: 98.6672% ( 2) 00:09:51.441 6.495 - 6.542: 98.6745% ( 1) 00:09:51.441 6.542 - 6.590: 98.6892% ( 2) 00:09:51.441 6.637 - 6.684: 98.6966% ( 1) 00:09:51.441 6.684 - 6.732: 98.7040% ( 1) 00:09:51.441 6.732 - 6.779: 98.7113% ( 1) 00:09:51.441 6.827 - 6.874: 98.7261% ( 2) 00:09:51.441 7.064 - 7.111: 98.7334% ( 1) 00:09:51.441 7.111 - 7.159: 98.7482% ( 2) 00:09:51.441 7.159 - 7.206: 98.7629% ( 2) 00:09:51.441 7.538 - 7.585: 98.7703% ( 1) 00:09:51.441 7.633 - 7.680: 98.7776% ( 1) 00:09:51.441 8.154 - 8.201: 98.7850% ( 1) 00:09:51.441 8.723 - 8.770: 98.7923% ( 1) 00:09:51.441 9.339 - 9.387: 98.7997% ( 1) 00:09:51.441 9.861 - 9.908: 98.8071% ( 1) 00:09:51.441 15.455 - 15.550: 98.8144% ( 1) 00:09:51.441 15.644 - 15.739: 98.8365% ( 3) 00:09:51.441 15.739 - 15.834: 98.8586% ( 3) 00:09:51.441 15.834 - 15.929: 98.8881% ( 4) 00:09:51.441 15.929 - 16.024: 98.9028% ( 2) 00:09:51.441 16.024 - 16.119: 98.9470% ( 6) 00:09:51.441 16.119 - 16.213: 98.9691% ( 3) 00:09:51.441 16.213 - 16.308: 98.9912% ( 3) 00:09:51.441 16.308 - 16.403: 99.0353% ( 6) 00:09:51.441 16.403 - 16.498: 99.0869% ( 7) 00:09:51.441 16.498 - 16.593: 99.1311% ( 6) 00:09:51.441 16.593 - 16.687: 99.1605% ( 4) 00:09:51.441 16.687 - 16.782: 99.1826% ( 3) 00:09:51.441 16.782 - 16.877: 99.2047% ( 3) 00:09:51.441 16.877 - 16.972: 99.2415% ( 5) 00:09:51.441 16.972 - 17.067: 99.2931% ( 7) 00:09:51.441 17.067 - 17.161: 99.3152% ( 3) 00:09:51.441 17.161 - 17.256: 99.3373% ( 3) 00:09:51.441 17.256 - 17.351: 99.3667% ( 4) 00:09:51.441 17.351 - 17.446: 99.3741% ( 1) 00:09:51.441 17.446 - 17.541: 99.3962% ( 3) 00:09:51.441 17.920 - 18.015: 99.4109% ( 2) 00:09:51.441 18.204 - 18.299: 99.4183% ( 1) 00:09:51.441 24.841 - 25.031: 99.4256% ( 1) 00:09:51.441 28.065 - 28.255: 99.4330% ( 1) 00:09:51.441 3980.705 - 4004.978: 99.8012% ( 50) 00:09:51.441 4004.978 - 4029.250: 100.0000% ( 27) 00:09:51.441 00:09:51.441
[2024-04-18 13:38:54.092886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:51.441
13:38:54 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:51.441 13:38:54 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:51.441 13:38:54 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:51.441 13:38:54 -- target/nvmf_vfio_user.sh@24 -- #
local malloc_num=Malloc3 00:09:51.441 13:38:54 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:51.698 [2024-04-18 13:38:54.412554] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:51.698 [ 00:09:51.698 { 00:09:51.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:51.698 "subtype": "Discovery", 00:09:51.698 "listen_addresses": [], 00:09:51.698 "allow_any_host": true, 00:09:51.698 "hosts": [] 00:09:51.698 }, 00:09:51.698 { 00:09:51.698 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:51.698 "subtype": "NVMe", 00:09:51.698 "listen_addresses": [ 00:09:51.698 { 00:09:51.698 "transport": "VFIOUSER", 00:09:51.698 "trtype": "VFIOUSER", 00:09:51.698 "adrfam": "IPv4", 00:09:51.698 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:51.698 "trsvcid": "0" 00:09:51.698 } 00:09:51.698 ], 00:09:51.698 "allow_any_host": true, 00:09:51.698 "hosts": [], 00:09:51.698 "serial_number": "SPDK1", 00:09:51.698 "model_number": "SPDK bdev Controller", 00:09:51.698 "max_namespaces": 32, 00:09:51.698 "min_cntlid": 1, 00:09:51.698 "max_cntlid": 65519, 00:09:51.698 "namespaces": [ 00:09:51.698 { 00:09:51.698 "nsid": 1, 00:09:51.698 "bdev_name": "Malloc1", 00:09:51.698 "name": "Malloc1", 00:09:51.699 "nguid": "CDA5039D2F114F738B33D2D7ED1681E7", 00:09:51.699 "uuid": "cda5039d-2f11-4f73-8b33-d2d7ed1681e7" 00:09:51.699 } 00:09:51.699 ] 00:09:51.699 }, 00:09:51.699 { 00:09:51.699 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:51.699 "subtype": "NVMe", 00:09:51.699 "listen_addresses": [ 00:09:51.699 { 00:09:51.699 "transport": "VFIOUSER", 00:09:51.699 "trtype": "VFIOUSER", 00:09:51.699 "adrfam": "IPv4", 00:09:51.699 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:51.699 "trsvcid": "0" 00:09:51.699 } 00:09:51.699 ], 00:09:51.699 "allow_any_host": true, 00:09:51.699 "hosts": [], 
00:09:51.699 "serial_number": "SPDK2", 00:09:51.699 "model_number": "SPDK bdev Controller", 00:09:51.699 "max_namespaces": 32, 00:09:51.699 "min_cntlid": 1, 00:09:51.699 "max_cntlid": 65519, 00:09:51.699 "namespaces": [ 00:09:51.699 { 00:09:51.699 "nsid": 1, 00:09:51.699 "bdev_name": "Malloc2", 00:09:51.699 "name": "Malloc2", 00:09:51.699 "nguid": "D26DC24DE9BE46AB9CF92261AA2B6E81", 00:09:51.699 "uuid": "d26dc24d-e9be-46ab-9cf9-2261aa2b6e81" 00:09:51.699 } 00:09:51.699 ] 00:09:51.699 } 00:09:51.699 ] 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2553540 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:51.699 13:38:54 -- common/autotest_common.sh@1251 -- # local i=0 00:09:51.699 13:38:54 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:51.699 13:38:54 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:09:51.699 13:38:54 -- common/autotest_common.sh@1262 -- # return 0 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:51.699 13:38:54 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:51.699 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.961 [2024-04-18 13:38:54.594708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.961 Malloc3 00:09:51.961 13:38:54 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:52.221 [2024-04-18 13:38:54.955406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.221 13:38:54 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:52.221 Asynchronous Event Request test 00:09:52.221 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.221 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.221 Registering asynchronous event callbacks... 00:09:52.221 Starting namespace attribute notice tests for all controllers... 00:09:52.221 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:52.221 aer_cb - Changed Namespace 00:09:52.221 Cleaning up... 
00:09:52.478 [ 00:09:52.478 { 00:09:52.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:52.478 "subtype": "Discovery", 00:09:52.478 "listen_addresses": [], 00:09:52.478 "allow_any_host": true, 00:09:52.478 "hosts": [] 00:09:52.478 }, 00:09:52.478 { 00:09:52.478 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:52.478 "subtype": "NVMe", 00:09:52.478 "listen_addresses": [ 00:09:52.478 { 00:09:52.478 "transport": "VFIOUSER", 00:09:52.478 "trtype": "VFIOUSER", 00:09:52.478 "adrfam": "IPv4", 00:09:52.478 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:52.478 "trsvcid": "0" 00:09:52.478 } 00:09:52.478 ], 00:09:52.478 "allow_any_host": true, 00:09:52.478 "hosts": [], 00:09:52.478 "serial_number": "SPDK1", 00:09:52.478 "model_number": "SPDK bdev Controller", 00:09:52.478 "max_namespaces": 32, 00:09:52.478 "min_cntlid": 1, 00:09:52.478 "max_cntlid": 65519, 00:09:52.478 "namespaces": [ 00:09:52.478 { 00:09:52.478 "nsid": 1, 00:09:52.478 "bdev_name": "Malloc1", 00:09:52.478 "name": "Malloc1", 00:09:52.478 "nguid": "CDA5039D2F114F738B33D2D7ED1681E7", 00:09:52.478 "uuid": "cda5039d-2f11-4f73-8b33-d2d7ed1681e7" 00:09:52.478 }, 00:09:52.478 { 00:09:52.478 "nsid": 2, 00:09:52.478 "bdev_name": "Malloc3", 00:09:52.478 "name": "Malloc3", 00:09:52.478 "nguid": "67B0AF5BA63B4E45BA71986592A7639F", 00:09:52.478 "uuid": "67b0af5b-a63b-4e45-ba71-986592a7639f" 00:09:52.478 } 00:09:52.478 ] 00:09:52.478 }, 00:09:52.478 { 00:09:52.478 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:52.478 "subtype": "NVMe", 00:09:52.478 "listen_addresses": [ 00:09:52.478 { 00:09:52.478 "transport": "VFIOUSER", 00:09:52.478 "trtype": "VFIOUSER", 00:09:52.478 "adrfam": "IPv4", 00:09:52.478 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:52.478 "trsvcid": "0" 00:09:52.478 } 00:09:52.478 ], 00:09:52.478 "allow_any_host": true, 00:09:52.478 "hosts": [], 00:09:52.478 "serial_number": "SPDK2", 00:09:52.478 "model_number": "SPDK bdev Controller", 00:09:52.478 "max_namespaces": 32, 00:09:52.478 
"min_cntlid": 1, 00:09:52.478 "max_cntlid": 65519, 00:09:52.478 "namespaces": [ 00:09:52.478 { 00:09:52.478 "nsid": 1, 00:09:52.478 "bdev_name": "Malloc2", 00:09:52.478 "name": "Malloc2", 00:09:52.478 "nguid": "D26DC24DE9BE46AB9CF92261AA2B6E81", 00:09:52.479 "uuid": "d26dc24d-e9be-46ab-9cf9-2261aa2b6e81" 00:09:52.479 } 00:09:52.479 ] 00:09:52.479 } 00:09:52.479 ] 00:09:52.479 13:38:55 -- target/nvmf_vfio_user.sh@44 -- # wait 2553540 00:09:52.479 13:38:55 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.479 13:38:55 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:52.479 13:38:55 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:52.479 13:38:55 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.479 [2024-04-18 13:38:55.227704] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:09:52.479 [2024-04-18 13:38:55.227745] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553567 ] 00:09:52.479 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.479 [2024-04-18 13:38:55.261312] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:52.479 [2024-04-18 13:38:55.266667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.479 [2024-04-18 13:38:55.266697] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6e1f544000 00:09:52.479 [2024-04-18 13:38:55.267662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.268664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.269689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.270696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.271706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.272707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.273718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.274723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.479 [2024-04-18 13:38:55.275734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.479 [2024-04-18 13:38:55.275759] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6e1f539000 00:09:52.479 [2024-04-18 13:38:55.276874] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.738 [2024-04-18 13:38:55.291780] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:52.738 [2024-04-18 13:38:55.291815] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:52.738 [2024-04-18 13:38:55.293919] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:52.738 [2024-04-18 13:38:55.293971] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:52.738 [2024-04-18 13:38:55.294058] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:52.738 [2024-04-18 13:38:55.294083] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:52.738 [2024-04-18 13:38:55.294093] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:52.738 [2024-04-18 13:38:55.294924] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:52.738 [2024-04-18 13:38:55.294945] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:52.738 [2024-04-18 13:38:55.294957] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:52.738 [2024-04-18 13:38:55.295931] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:52.738 [2024-04-18 13:38:55.295952] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:52.738 [2024-04-18 13:38:55.295967] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.296941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:52.738 [2024-04-18 13:38:55.296961] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.297950] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:52.738 [2024-04-18 13:38:55.297970] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:52.738 [2024-04-18 13:38:55.297980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.297992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.298108] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:52.738 [2024-04-18 13:38:55.298117] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.298126] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:52.738 [2024-04-18 13:38:55.298960] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:52.738 [2024-04-18 13:38:55.299969] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:52.738 [2024-04-18 13:38:55.300972] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:52.738 [2024-04-18 13:38:55.301966] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:52.738 [2024-04-18 13:38:55.302033] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:52.738 [2024-04-18 13:38:55.302989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:52.738 [2024-04-18 13:38:55.303009] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:52.738 [2024-04-18 13:38:55.303018] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.303042] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:52.738 [2024-04-18 13:38:55.303059] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.303085] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.738 [2024-04-18 13:38:55.303095] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.738 [2024-04-18 13:38:55.303116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.738 [2024-04-18 13:38:55.309192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:52.738 [2024-04-18 13:38:55.309216] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:52.738 [2024-04-18 13:38:55.309225] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:52.738 [2024-04-18 13:38:55.309233] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:52.738 [2024-04-18 13:38:55.309241] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:52.738 [2024-04-18 13:38:55.309249] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:52.738 [2024-04-18 13:38:55.309258] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:52.738 [2024-04-18 13:38:55.309266] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.309280] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.309301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:52.738 [2024-04-18 13:38:55.317188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:52.738 [2024-04-18 13:38:55.317218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.738 [2024-04-18 13:38:55.317233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.738 [2024-04-18 13:38:55.317245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.738 [2024-04-18 13:38:55.317258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.738 [2024-04-18 13:38:55.317267] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.317283] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.317298] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:52.738 [2024-04-18 13:38:55.325189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:52.738 [2024-04-18 13:38:55.325208] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:52.738 [2024-04-18 13:38:55.325218] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.325235] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.325247] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.325262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.738 [2024-04-18 13:38:55.333189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:52.738 [2024-04-18 13:38:55.333265] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.333281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:52.738 [2024-04-18 13:38:55.333295] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:52.738 [2024-04-18 13:38:55.333304] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:52.738 [2024-04-18 13:38:55.333314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.341189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.341212] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:52.739 [2024-04-18 13:38:55.341244] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.341268] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.341285] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.739 [2024-04-18 13:38:55.341294] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.739 [2024-04-18 13:38:55.341304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.349189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.349218] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.349251] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:09:52.739 [2024-04-18 13:38:55.349265] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.739 [2024-04-18 13:38:55.349273] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.739 [2024-04-18 13:38:55.349283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.357204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.357233] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.357246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.357261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.357271] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.357280] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:52.739 [2024-04-18 13:38:55.357289] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:52.739 [2024-04-18 13:38:55.357297] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:52.739 [2024-04-18 
13:38:55.357305] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:52.739 [2024-04-18 13:38:55.357330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.365203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.365229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.373191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.373231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.381189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.381214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.389200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.389233] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:52.739 [2024-04-18 13:38:55.389244] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:52.739 [2024-04-18 13:38:55.389251] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:52.739 [2024-04-18 13:38:55.389257] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:09:52.739 [2024-04-18 13:38:55.389267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:52.739 [2024-04-18 13:38:55.389279] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:52.739 [2024-04-18 13:38:55.389288] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:52.739 [2024-04-18 13:38:55.389297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.389307] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:52.739 [2024-04-18 13:38:55.389315] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.739 [2024-04-18 13:38:55.389324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.389336] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:52.739 [2024-04-18 13:38:55.389344] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:52.739 [2024-04-18 13:38:55.389353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:52.739 [2024-04-18 13:38:55.397189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.397217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 
13:38:55.397234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:52.739 [2024-04-18 13:38:55.397246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:52.739 ===================================================== 00:09:52.739 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:52.739 ===================================================== 00:09:52.739 Controller Capabilities/Features 00:09:52.739 ================================ 00:09:52.739 Vendor ID: 4e58 00:09:52.739 Subsystem Vendor ID: 4e58 00:09:52.739 Serial Number: SPDK2 00:09:52.739 Model Number: SPDK bdev Controller 00:09:52.739 Firmware Version: 24.05 00:09:52.739 Recommended Arb Burst: 6 00:09:52.739 IEEE OUI Identifier: 8d 6b 50 00:09:52.739 Multi-path I/O 00:09:52.739 May have multiple subsystem ports: Yes 00:09:52.739 May have multiple controllers: Yes 00:09:52.739 Associated with SR-IOV VF: No 00:09:52.739 Max Data Transfer Size: 131072 00:09:52.739 Max Number of Namespaces: 32 00:09:52.739 Max Number of I/O Queues: 127 00:09:52.739 NVMe Specification Version (VS): 1.3 00:09:52.739 NVMe Specification Version (Identify): 1.3 00:09:52.739 Maximum Queue Entries: 256 00:09:52.739 Contiguous Queues Required: Yes 00:09:52.739 Arbitration Mechanisms Supported 00:09:52.739 Weighted Round Robin: Not Supported 00:09:52.739 Vendor Specific: Not Supported 00:09:52.739 Reset Timeout: 15000 ms 00:09:52.739 Doorbell Stride: 4 bytes 00:09:52.739 NVM Subsystem Reset: Not Supported 00:09:52.739 Command Sets Supported 00:09:52.739 NVM Command Set: Supported 00:09:52.739 Boot Partition: Not Supported 00:09:52.739 Memory Page Size Minimum: 4096 bytes 00:09:52.739 Memory Page Size Maximum: 4096 bytes 00:09:52.739 Persistent Memory Region: Not Supported 00:09:52.739 Optional Asynchronous Events Supported 00:09:52.739 Namespace 
Attribute Notices: Supported 00:09:52.739 Firmware Activation Notices: Not Supported 00:09:52.739 ANA Change Notices: Not Supported 00:09:52.739 PLE Aggregate Log Change Notices: Not Supported 00:09:52.739 LBA Status Info Alert Notices: Not Supported 00:09:52.739 EGE Aggregate Log Change Notices: Not Supported 00:09:52.739 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.739 Zone Descriptor Change Notices: Not Supported 00:09:52.739 Discovery Log Change Notices: Not Supported 00:09:52.739 Controller Attributes 00:09:52.739 128-bit Host Identifier: Supported 00:09:52.739 Non-Operational Permissive Mode: Not Supported 00:09:52.739 NVM Sets: Not Supported 00:09:52.739 Read Recovery Levels: Not Supported 00:09:52.739 Endurance Groups: Not Supported 00:09:52.739 Predictable Latency Mode: Not Supported 00:09:52.739 Traffic Based Keep ALive: Not Supported 00:09:52.739 Namespace Granularity: Not Supported 00:09:52.739 SQ Associations: Not Supported 00:09:52.739 UUID List: Not Supported 00:09:52.739 Multi-Domain Subsystem: Not Supported 00:09:52.739 Fixed Capacity Management: Not Supported 00:09:52.739 Variable Capacity Management: Not Supported 00:09:52.739 Delete Endurance Group: Not Supported 00:09:52.739 Delete NVM Set: Not Supported 00:09:52.739 Extended LBA Formats Supported: Not Supported 00:09:52.739 Flexible Data Placement Supported: Not Supported 00:09:52.739 00:09:52.739 Controller Memory Buffer Support 00:09:52.739 ================================ 00:09:52.739 Supported: No 00:09:52.739 00:09:52.739 Persistent Memory Region Support 00:09:52.739 ================================ 00:09:52.739 Supported: No 00:09:52.739 00:09:52.739 Admin Command Set Attributes 00:09:52.739 ============================ 00:09:52.739 Security Send/Receive: Not Supported 00:09:52.739 Format NVM: Not Supported 00:09:52.739 Firmware Activate/Download: Not Supported 00:09:52.739 Namespace Management: Not Supported 00:09:52.739 Device Self-Test: Not Supported 00:09:52.739 
Directives: Not Supported 00:09:52.739 NVMe-MI: Not Supported 00:09:52.739 Virtualization Management: Not Supported 00:09:52.740 Doorbell Buffer Config: Not Supported 00:09:52.740 Get LBA Status Capability: Not Supported 00:09:52.740 Command & Feature Lockdown Capability: Not Supported 00:09:52.740 Abort Command Limit: 4 00:09:52.740 Async Event Request Limit: 4 00:09:52.740 Number of Firmware Slots: N/A 00:09:52.740 Firmware Slot 1 Read-Only: N/A 00:09:52.740 Firmware Activation Without Reset: N/A 00:09:52.740 Multiple Update Detection Support: N/A 00:09:52.740 Firmware Update Granularity: No Information Provided 00:09:52.740 Per-Namespace SMART Log: No 00:09:52.740 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.740 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:52.740 Command Effects Log Page: Supported 00:09:52.740 Get Log Page Extended Data: Supported 00:09:52.740 Telemetry Log Pages: Not Supported 00:09:52.740 Persistent Event Log Pages: Not Supported 00:09:52.740 Supported Log Pages Log Page: May Support 00:09:52.740 Commands Supported & Effects Log Page: Not Supported 00:09:52.740 Feature Identifiers & Effects Log Page:May Support 00:09:52.740 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.740 Data Area 4 for Telemetry Log: Not Supported 00:09:52.740 Error Log Page Entries Supported: 128 00:09:52.740 Keep Alive: Supported 00:09:52.740 Keep Alive Granularity: 10000 ms 00:09:52.740 00:09:52.740 NVM Command Set Attributes 00:09:52.740 ========================== 00:09:52.740 Submission Queue Entry Size 00:09:52.740 Max: 64 00:09:52.740 Min: 64 00:09:52.740 Completion Queue Entry Size 00:09:52.740 Max: 16 00:09:52.740 Min: 16 00:09:52.740 Number of Namespaces: 32 00:09:52.740 Compare Command: Supported 00:09:52.740 Write Uncorrectable Command: Not Supported 00:09:52.740 Dataset Management Command: Supported 00:09:52.740 Write Zeroes Command: Supported 00:09:52.740 Set Features Save Field: Not Supported 00:09:52.740 Reservations: Not 
Supported 00:09:52.740 Timestamp: Not Supported 00:09:52.740 Copy: Supported 00:09:52.740 Volatile Write Cache: Present 00:09:52.740 Atomic Write Unit (Normal): 1 00:09:52.740 Atomic Write Unit (PFail): 1 00:09:52.740 Atomic Compare & Write Unit: 1 00:09:52.740 Fused Compare & Write: Supported 00:09:52.740 Scatter-Gather List 00:09:52.740 SGL Command Set: Supported (Dword aligned) 00:09:52.740 SGL Keyed: Not Supported 00:09:52.740 SGL Bit Bucket Descriptor: Not Supported 00:09:52.740 SGL Metadata Pointer: Not Supported 00:09:52.740 Oversized SGL: Not Supported 00:09:52.740 SGL Metadata Address: Not Supported 00:09:52.740 SGL Offset: Not Supported 00:09:52.740 Transport SGL Data Block: Not Supported 00:09:52.740 Replay Protected Memory Block: Not Supported 00:09:52.740 00:09:52.740 Firmware Slot Information 00:09:52.740 ========================= 00:09:52.740 Active slot: 1 00:09:52.740 Slot 1 Firmware Revision: 24.05 00:09:52.740 00:09:52.740 00:09:52.740 Commands Supported and Effects 00:09:52.740 ============================== 00:09:52.740 Admin Commands 00:09:52.740 -------------- 00:09:52.740 Get Log Page (02h): Supported 00:09:52.740 Identify (06h): Supported 00:09:52.740 Abort (08h): Supported 00:09:52.740 Set Features (09h): Supported 00:09:52.740 Get Features (0Ah): Supported 00:09:52.740 Asynchronous Event Request (0Ch): Supported 00:09:52.740 Keep Alive (18h): Supported 00:09:52.740 I/O Commands 00:09:52.740 ------------ 00:09:52.740 Flush (00h): Supported LBA-Change 00:09:52.740 Write (01h): Supported LBA-Change 00:09:52.740 Read (02h): Supported 00:09:52.740 Compare (05h): Supported 00:09:52.740 Write Zeroes (08h): Supported LBA-Change 00:09:52.740 Dataset Management (09h): Supported LBA-Change 00:09:52.740 Copy (19h): Supported LBA-Change 00:09:52.740 Unknown (79h): Supported LBA-Change 00:09:52.740 Unknown (7Ah): Supported 00:09:52.740 00:09:52.740 Error Log 00:09:52.740 ========= 00:09:52.740 00:09:52.740 Arbitration 00:09:52.740 =========== 
00:09:52.740 Arbitration Burst: 1 00:09:52.740 00:09:52.740 Power Management 00:09:52.740 ================ 00:09:52.740 Number of Power States: 1 00:09:52.740 Current Power State: Power State #0 00:09:52.740 Power State #0: 00:09:52.740 Max Power: 0.00 W 00:09:52.740 Non-Operational State: Operational 00:09:52.740 Entry Latency: Not Reported 00:09:52.740 Exit Latency: Not Reported 00:09:52.740 Relative Read Throughput: 0 00:09:52.740 Relative Read Latency: 0 00:09:52.740 Relative Write Throughput: 0 00:09:52.740 Relative Write Latency: 0 00:09:52.740 Idle Power: Not Reported 00:09:52.740 Active Power: Not Reported 00:09:52.740 Non-Operational Permissive Mode: Not Supported 00:09:52.740 00:09:52.740 Health Information 00:09:52.740 ================== 00:09:52.740 Critical Warnings: 00:09:52.740 Available Spare Space: OK 00:09:52.740 Temperature: OK 00:09:52.740 Device Reliability: OK 00:09:52.740 Read Only: No 00:09:52.740 Volatile Memory Backup: OK 00:09:52.740 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-18 13:38:55.397376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:52.740 [2024-04-18 13:38:55.405203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:52.740 [2024-04-18 13:38:55.405250] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:52.740 [2024-04-18 13:38:55.405267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.740 [2024-04-18 13:38:55.405278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.740 [2024-04-18 13:38:55.405289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:09:52.740 [2024-04-18 13:38:55.405298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.740 [2024-04-18 13:38:55.405380] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:52.740 [2024-04-18 13:38:55.405402] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:52.740 [2024-04-18 13:38:55.406383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:52.740 [2024-04-18 13:38:55.406459] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:52.740 [2024-04-18 13:38:55.406485] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:52.740 [2024-04-18 13:38:55.407400] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:52.740 [2024-04-18 13:38:55.407424] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:52.740 [2024-04-18 13:38:55.407492] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:52.740 [2024-04-18 13:38:55.410189] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.740 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:52.740 Available Spare: 0% 00:09:52.740 Available Spare Threshold: 0% 00:09:52.740 Life Percentage Used: 0% 00:09:52.740 Data Units Read: 0 00:09:52.740 Data Units Written: 0 00:09:52.740 Host Read Commands: 0 00:09:52.740 Host Write
Commands: 0 00:09:52.740 Controller Busy Time: 0 minutes 00:09:52.740 Power Cycles: 0 00:09:52.740 Power On Hours: 0 hours 00:09:52.740 Unsafe Shutdowns: 0 00:09:52.740 Unrecoverable Media Errors: 0 00:09:52.740 Lifetime Error Log Entries: 0 00:09:52.740 Warning Temperature Time: 0 minutes 00:09:52.740 Critical Temperature Time: 0 minutes 00:09:52.740 00:09:52.740 Number of Queues 00:09:52.740 ================ 00:09:52.740 Number of I/O Submission Queues: 127 00:09:52.740 Number of I/O Completion Queues: 127 00:09:52.740 00:09:52.740 Active Namespaces 00:09:52.740 ================= 00:09:52.740 Namespace ID:1 00:09:52.740 Error Recovery Timeout: Unlimited 00:09:52.740 Command Set Identifier: NVM (00h) 00:09:52.740 Deallocate: Supported 00:09:52.740 Deallocated/Unwritten Error: Not Supported 00:09:52.740 Deallocated Read Value: Unknown 00:09:52.740 Deallocate in Write Zeroes: Not Supported 00:09:52.740 Deallocated Guard Field: 0xFFFF 00:09:52.740 Flush: Supported 00:09:52.740 Reservation: Supported 00:09:52.740 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.740 Size (in LBAs): 131072 (0GiB) 00:09:52.740 Capacity (in LBAs): 131072 (0GiB) 00:09:52.740 Utilization (in LBAs): 131072 (0GiB) 00:09:52.740 NGUID: D26DC24DE9BE46AB9CF92261AA2B6E81 00:09:52.740 UUID: d26dc24d-e9be-46ab-9cf9-2261aa2b6e81 00:09:52.740 Thin Provisioning: Not Supported 00:09:52.740 Per-NS Atomic Units: Yes 00:09:52.740 Atomic Boundary Size (Normal): 0 00:09:52.740 Atomic Boundary Size (PFail): 0 00:09:52.740 Atomic Boundary Offset: 0 00:09:52.740 Maximum Single Source Range Length: 65535 00:09:52.741 Maximum Copy Length: 65535 00:09:52.741 Maximum Source Range Count: 1 00:09:52.741 NGUID/EUI64 Never Reused: No 00:09:52.741 Namespace Write Protected: No 00:09:52.741 Number of LBA Formats: 1 00:09:52.741 Current LBA Format: LBA Format #00 00:09:52.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.741 00:09:52.741 13:38:55 -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:52.741 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.998 [2024-04-18 13:38:55.638007] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:58.259 [2024-04-18 13:39:00.745537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:58.259 Initializing NVMe Controllers 00:09:58.259 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:58.259 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:58.259 Initialization complete. Launching workers. 00:09:58.259 ======================================================== 00:09:58.259 Latency(us) 00:09:58.259 Device Information : IOPS MiB/s Average min max 00:09:58.259 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33011.39 128.95 3878.94 1203.60 7601.45 00:09:58.259 ======================================================== 00:09:58.259 Total : 33011.39 128.95 3878.94 1203.60 7601.45 00:09:58.259 00:09:58.259 13:39:00 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.259 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.259 [2024-04-18 13:39:00.987218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:03.530 [2024-04-18 13:39:06.007451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:03.530 Initializing NVMe Controllers 
00:10:03.530 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:03.530 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:03.530 Initialization complete. Launching workers. 00:10:03.530 ======================================================== 00:10:03.530 Latency(us) 00:10:03.530 Device Information : IOPS MiB/s Average min max 00:10:03.530 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31973.60 124.90 4002.56 1197.78 9330.12 00:10:03.530 ======================================================== 00:10:03.530 Total : 31973.60 124.90 4002.56 1197.78 9330.12 00:10:03.530 00:10:03.530 13:39:06 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:03.530 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.530 [2024-04-18 13:39:06.221413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:08.804 [2024-04-18 13:39:11.365316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:08.804 Initializing NVMe Controllers 00:10:08.804 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:08.804 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:08.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:08.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:08.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:08.804 Initialization complete. Launching workers. 
00:10:08.804 Starting thread on core 2 00:10:08.804 Starting thread on core 3 00:10:08.804 Starting thread on core 1 00:10:08.804 13:39:11 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:08.804 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.064 [2024-04-18 13:39:11.680674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.377 [2024-04-18 13:39:14.761404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.377 Initializing NVMe Controllers 00:10:12.377 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.377 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:12.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:12.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:12.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:12.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.377 Initialization complete. Launching workers. 
00:10:12.377 Starting thread on core 1 with urgent priority queue 00:10:12.377 Starting thread on core 2 with urgent priority queue 00:10:12.377 Starting thread on core 3 with urgent priority queue 00:10:12.377 Starting thread on core 0 with urgent priority queue 00:10:12.377 SPDK bdev Controller (SPDK2 ) core 0: 4510.33 IO/s 22.17 secs/100000 ios 00:10:12.377 SPDK bdev Controller (SPDK2 ) core 1: 4565.00 IO/s 21.91 secs/100000 ios 00:10:12.377 SPDK bdev Controller (SPDK2 ) core 2: 4357.00 IO/s 22.95 secs/100000 ios 00:10:12.377 SPDK bdev Controller (SPDK2 ) core 3: 4550.33 IO/s 21.98 secs/100000 ios 00:10:12.377 ======================================================== 00:10:12.377 00:10:12.377 13:39:14 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:12.377 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.377 [2024-04-18 13:39:15.047769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.377 [2024-04-18 13:39:15.059929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.377 Initializing NVMe Controllers 00:10:12.377 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.377 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.377 Namespace ID: 1 size: 0GB 00:10:12.377 Initialization complete. 00:10:12.377 INFO: using host memory buffer for IO 00:10:12.377 Hello world! 
00:10:12.377 13:39:15 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:12.378 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.637 [2024-04-18 13:39:15.351961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.016 Initializing NVMe Controllers 00:10:14.016 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.016 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.016 Initialization complete. Launching workers. 00:10:14.016 submit (in ns) avg, min, max = 8058.0, 3500.0, 4016535.6 00:10:14.016 complete (in ns) avg, min, max = 25042.5, 2041.1, 4021004.4 00:10:14.016 00:10:14.016 Submit histogram 00:10:14.016 ================ 00:10:14.016 Range in us Cumulative Count 00:10:14.016 3.484 - 3.508: 0.0223% ( 3) 00:10:14.016 3.508 - 3.532: 0.1188% ( 13) 00:10:14.016 3.532 - 3.556: 2.0341% ( 258) 00:10:14.016 3.556 - 3.579: 8.9087% ( 926) 00:10:14.016 3.579 - 3.603: 21.2027% ( 1656) 00:10:14.016 3.603 - 3.627: 31.3512% ( 1367) 00:10:14.016 3.627 - 3.650: 39.7921% ( 1137) 00:10:14.016 3.650 - 3.674: 46.9859% ( 969) 00:10:14.016 3.674 - 3.698: 53.3482% ( 857) 00:10:14.016 3.698 - 3.721: 59.0572% ( 769) 00:10:14.016 3.721 - 3.745: 61.9376% ( 388) 00:10:14.016 3.745 - 3.769: 64.8627% ( 394) 00:10:14.016 3.769 - 3.793: 67.4981% ( 355) 00:10:14.016 3.793 - 3.816: 70.9651% ( 467) 00:10:14.016 3.816 - 3.840: 75.1596% ( 565) 00:10:14.016 3.840 - 3.864: 79.8144% ( 627) 00:10:14.016 3.864 - 3.887: 83.4001% ( 483) 00:10:14.016 3.887 - 3.911: 85.9169% ( 339) 00:10:14.016 3.911 - 3.935: 87.8842% ( 265) 00:10:14.016 3.935 - 3.959: 89.6214% ( 234) 00:10:14.016 3.959 - 3.982: 90.8018% ( 159) 00:10:14.016 3.982 - 4.006: 91.7892% ( 133) 00:10:14.016 4.006 - 4.030: 92.5316% ( 100) 00:10:14.016 4.030 - 4.053: 
93.3630% ( 112) 00:10:14.016 4.053 - 4.077: 94.2316% ( 117) 00:10:14.016 4.077 - 4.101: 95.0186% ( 106) 00:10:14.016 4.101 - 4.124: 95.5011% ( 65) 00:10:14.016 4.124 - 4.148: 95.8723% ( 50) 00:10:14.016 4.148 - 4.172: 96.1321% ( 35) 00:10:14.016 4.172 - 4.196: 96.4143% ( 38) 00:10:14.016 4.196 - 4.219: 96.6741% ( 35) 00:10:14.016 4.219 - 4.243: 96.7632% ( 12) 00:10:14.016 4.243 - 4.267: 96.8300% ( 9) 00:10:14.016 4.267 - 4.290: 96.9636% ( 18) 00:10:14.016 4.290 - 4.314: 97.0750% ( 15) 00:10:14.016 4.314 - 4.338: 97.1789% ( 14) 00:10:14.016 4.338 - 4.361: 97.2680% ( 12) 00:10:14.016 4.361 - 4.385: 97.3125% ( 6) 00:10:14.016 4.385 - 4.409: 97.3571% ( 6) 00:10:14.016 4.409 - 4.433: 97.4239% ( 9) 00:10:14.016 4.433 - 4.456: 97.4610% ( 5) 00:10:14.016 4.456 - 4.480: 97.4759% ( 2) 00:10:14.016 4.480 - 4.504: 97.4907% ( 2) 00:10:14.016 4.504 - 4.527: 97.5130% ( 3) 00:10:14.016 4.527 - 4.551: 97.5204% ( 1) 00:10:14.016 4.599 - 4.622: 97.5353% ( 2) 00:10:14.016 4.646 - 4.670: 97.5427% ( 1) 00:10:14.016 4.670 - 4.693: 97.5501% ( 1) 00:10:14.016 4.717 - 4.741: 97.5575% ( 1) 00:10:14.016 4.741 - 4.764: 97.5650% ( 1) 00:10:14.016 4.812 - 4.836: 97.5872% ( 3) 00:10:14.016 4.836 - 4.859: 97.6244% ( 5) 00:10:14.016 4.859 - 4.883: 97.6689% ( 6) 00:10:14.016 4.883 - 4.907: 97.7283% ( 8) 00:10:14.016 4.907 - 4.930: 97.7951% ( 9) 00:10:14.016 4.930 - 4.954: 97.8322% ( 5) 00:10:14.016 4.954 - 4.978: 97.8990% ( 9) 00:10:14.016 4.978 - 5.001: 97.9733% ( 10) 00:10:14.016 5.001 - 5.025: 98.0401% ( 9) 00:10:14.016 5.025 - 5.049: 98.0772% ( 5) 00:10:14.016 5.049 - 5.073: 98.0995% ( 3) 00:10:14.016 5.073 - 5.096: 98.1218% ( 3) 00:10:14.016 5.096 - 5.120: 98.1589% ( 5) 00:10:14.016 5.120 - 5.144: 98.1886% ( 4) 00:10:14.016 5.144 - 5.167: 98.2257% ( 5) 00:10:14.016 5.167 - 5.191: 98.2405% ( 2) 00:10:14.016 5.191 - 5.215: 98.2702% ( 4) 00:10:14.016 5.215 - 5.239: 98.2999% ( 4) 00:10:14.016 5.239 - 5.262: 98.3370% ( 5) 00:10:14.016 5.262 - 5.286: 98.3445% ( 1) 00:10:14.016 5.286 - 5.310: 98.3593% 
( 2) 00:10:14.016 5.310 - 5.333: 98.3742% ( 2) 00:10:14.016 5.333 - 5.357: 98.3816% ( 1) 00:10:14.016 5.357 - 5.381: 98.3964% ( 2) 00:10:14.016 5.547 - 5.570: 98.4039% ( 1) 00:10:14.016 5.689 - 5.713: 98.4113% ( 1) 00:10:14.016 5.831 - 5.855: 98.4187% ( 1) 00:10:14.016 6.021 - 6.044: 98.4261% ( 1) 00:10:14.016 6.068 - 6.116: 98.4336% ( 1) 00:10:14.016 6.163 - 6.210: 98.4410% ( 1) 00:10:14.016 6.258 - 6.305: 98.4484% ( 1) 00:10:14.016 6.495 - 6.542: 98.4633% ( 2) 00:10:14.016 6.684 - 6.732: 98.4707% ( 1) 00:10:14.016 6.874 - 6.921: 98.4781% ( 1) 00:10:14.016 6.969 - 7.016: 98.4855% ( 1) 00:10:14.016 7.159 - 7.206: 98.4929% ( 1) 00:10:14.016 7.396 - 7.443: 98.5004% ( 1) 00:10:14.016 7.443 - 7.490: 98.5078% ( 1) 00:10:14.016 7.538 - 7.585: 98.5152% ( 1) 00:10:14.016 7.585 - 7.633: 98.5226% ( 1) 00:10:14.016 7.727 - 7.775: 98.5301% ( 1) 00:10:14.016 7.775 - 7.822: 98.5523% ( 3) 00:10:14.016 7.822 - 7.870: 98.5598% ( 1) 00:10:14.016 7.870 - 7.917: 98.5672% ( 1) 00:10:14.016 7.964 - 8.012: 98.5895% ( 3) 00:10:14.016 8.059 - 8.107: 98.5969% ( 1) 00:10:14.016 8.154 - 8.201: 98.6043% ( 1) 00:10:14.016 8.201 - 8.249: 98.6488% ( 6) 00:10:14.016 8.296 - 8.344: 98.6637% ( 2) 00:10:14.016 8.344 - 8.391: 98.6711% ( 1) 00:10:14.016 8.391 - 8.439: 98.6785% ( 1) 00:10:14.016 8.486 - 8.533: 98.6934% ( 2) 00:10:14.016 8.533 - 8.581: 98.7082% ( 2) 00:10:14.016 8.581 - 8.628: 98.7157% ( 1) 00:10:14.016 8.676 - 8.723: 98.7231% ( 1) 00:10:14.016 8.723 - 8.770: 98.7305% ( 1) 00:10:14.016 8.770 - 8.818: 98.7379% ( 1) 00:10:14.016 8.818 - 8.865: 98.7454% ( 1) 00:10:14.016 8.865 - 8.913: 98.7528% ( 1) 00:10:14.016 8.913 - 8.960: 98.7676% ( 2) 00:10:14.016 8.960 - 9.007: 98.7751% ( 1) 00:10:14.016 9.007 - 9.055: 98.7899% ( 2) 00:10:14.016 9.055 - 9.102: 98.7973% ( 1) 00:10:14.016 9.387 - 9.434: 98.8122% ( 2) 00:10:14.016 9.481 - 9.529: 98.8344% ( 3) 00:10:14.017 9.576 - 9.624: 98.8419% ( 1) 00:10:14.017 9.624 - 9.671: 98.8493% ( 1) 00:10:14.017 9.861 - 9.908: 98.8567% ( 1) 00:10:14.017 10.193 
- 10.240: 98.8716% ( 2) 00:10:14.017 10.477 - 10.524: 98.8790% ( 1) 00:10:14.017 10.572 - 10.619: 98.8864% ( 1) 00:10:14.017 10.714 - 10.761: 98.8938% ( 1) 00:10:14.017 10.761 - 10.809: 98.9013% ( 1) 00:10:14.017 10.904 - 10.951: 98.9087% ( 1) 00:10:14.017 10.999 - 11.046: 98.9161% ( 1) 00:10:14.017 11.046 - 11.093: 98.9235% ( 1) 00:10:14.017 11.141 - 11.188: 98.9310% ( 1) 00:10:14.017 11.283 - 11.330: 98.9384% ( 1) 00:10:14.017 11.425 - 11.473: 98.9458% ( 1) 00:10:14.017 11.615 - 11.662: 98.9532% ( 1) 00:10:14.017 11.757 - 11.804: 98.9681% ( 2) 00:10:14.017 12.326 - 12.421: 98.9755% ( 1) 00:10:14.017 12.610 - 12.705: 98.9829% ( 1) 00:10:14.017 12.705 - 12.800: 98.9903% ( 1) 00:10:14.017 12.800 - 12.895: 99.0052% ( 2) 00:10:14.017 13.179 - 13.274: 99.0200% ( 2) 00:10:14.017 13.464 - 13.559: 99.0275% ( 1) 00:10:14.017 13.653 - 13.748: 99.0349% ( 1) 00:10:14.017 13.748 - 13.843: 99.0423% ( 1) 00:10:14.017 14.127 - 14.222: 99.0497% ( 1) 00:10:14.017 14.317 - 14.412: 99.0572% ( 1) 00:10:14.017 14.412 - 14.507: 99.0646% ( 1) 00:10:14.017 14.791 - 14.886: 99.0720% ( 1) 00:10:14.017 14.981 - 15.076: 99.0794% ( 1) 00:10:14.017 15.360 - 15.455: 99.0869% ( 1) 00:10:14.017 17.067 - 17.161: 99.0943% ( 1) 00:10:14.017 17.256 - 17.351: 99.1017% ( 1) 00:10:14.017 17.351 - 17.446: 99.1463% ( 6) 00:10:14.017 17.446 - 17.541: 99.1908% ( 6) 00:10:14.017 17.541 - 17.636: 99.2131% ( 3) 00:10:14.017 17.636 - 17.730: 99.2428% ( 4) 00:10:14.017 17.730 - 17.825: 99.2650% ( 3) 00:10:14.017 17.825 - 17.920: 99.2725% ( 1) 00:10:14.017 17.920 - 18.015: 99.3170% ( 6) 00:10:14.017 18.015 - 18.110: 99.3838% ( 9) 00:10:14.017 18.110 - 18.204: 99.4432% ( 8) 00:10:14.017 18.204 - 18.299: 99.5174% ( 10) 00:10:14.017 18.299 - 18.394: 99.6065% ( 12) 00:10:14.017 18.394 - 18.489: 99.6437% ( 5) 00:10:14.017 18.489 - 18.584: 99.7253% ( 11) 00:10:14.017 18.584 - 18.679: 99.7699% ( 6) 00:10:14.017 18.679 - 18.773: 99.7773% ( 1) 00:10:14.017 18.773 - 18.868: 99.8070% ( 4) 00:10:14.017 18.868 - 18.963: 
99.8144% ( 1) 00:10:14.017 19.058 - 19.153: 99.8218% ( 1) 00:10:14.017 19.153 - 19.247: 99.8367% ( 2) 00:10:14.017 19.247 - 19.342: 99.8515% ( 2) 00:10:14.017 19.342 - 19.437: 99.8589% ( 1) 00:10:14.017 19.532 - 19.627: 99.8664% ( 1) 00:10:14.017 19.911 - 20.006: 99.8738% ( 1) 00:10:14.017 22.281 - 22.376: 99.8812% ( 1) 00:10:14.017 22.566 - 22.661: 99.8886% ( 1) 00:10:14.017 27.496 - 27.686: 99.8961% ( 1) 00:10:14.017 3980.705 - 4004.978: 99.9777% ( 11) 00:10:14.017 4004.978 - 4029.250: 100.0000% ( 3) 00:10:14.017 00:10:14.017 Complete histogram 00:10:14.017 ================== 00:10:14.017 Range in us Cumulative Count 00:10:14.017 2.039 - 2.050: 0.7498% ( 101) 00:10:14.017 2.050 - 2.062: 9.9258% ( 1236) 00:10:14.017 2.062 - 2.074: 14.3281% ( 593) 00:10:14.017 2.074 - 2.086: 24.9072% ( 1425) 00:10:14.017 2.086 - 2.098: 53.0661% ( 3793) 00:10:14.017 2.098 - 2.110: 60.0965% ( 947) 00:10:14.017 2.110 - 2.121: 63.4150% ( 447) 00:10:14.017 2.121 - 2.133: 66.7335% ( 447) 00:10:14.017 2.133 - 2.145: 67.7506% ( 137) 00:10:14.017 2.145 - 2.157: 72.7097% ( 668) 00:10:14.017 2.157 - 2.169: 80.1411% ( 1001) 00:10:14.017 2.169 - 2.181: 81.9970% ( 250) 00:10:14.017 2.181 - 2.193: 83.1106% ( 150) 00:10:14.017 2.193 - 2.204: 84.7290% ( 218) 00:10:14.017 2.204 - 2.216: 85.7461% ( 137) 00:10:14.017 2.216 - 2.228: 88.0327% ( 308) 00:10:14.017 2.228 - 2.240: 91.9896% ( 533) 00:10:14.017 2.240 - 2.252: 93.5412% ( 209) 00:10:14.017 2.252 - 2.264: 94.0831% ( 73) 00:10:14.017 2.264 - 2.276: 94.4618% ( 51) 00:10:14.017 2.276 - 2.287: 94.7290% ( 36) 00:10:14.017 2.287 - 2.299: 94.9740% ( 33) 00:10:14.017 2.299 - 2.311: 95.3229% ( 47) 00:10:14.017 2.311 - 2.323: 95.7164% ( 53) 00:10:14.017 2.323 - 2.335: 95.8872% ( 23) 00:10:14.017 2.335 - 2.347: 95.9391% ( 7) 00:10:14.017 2.347 - 2.359: 96.1173% ( 24) 00:10:14.017 2.359 - 2.370: 96.2658% ( 20) 00:10:14.017 2.370 - 2.382: 96.5330% ( 36) 00:10:14.017 2.382 - 2.394: 96.8968% ( 49) 00:10:14.017 2.394 - 2.406: 97.2309% ( 45) 00:10:14.017 2.406 - 
2.418: 97.4610% ( 31) 00:10:14.017 2.418 - 2.430: 97.6540% ( 26) 00:10:14.017 2.430 - 2.441: 97.8099% ( 21) 00:10:14.017 2.441 - 2.453: 97.9436% ( 18) 00:10:14.017 2.453 - 2.465: 98.0772% ( 18) 00:10:14.017 2.465 - 2.477: 98.1589% ( 11) 00:10:14.017 2.477 - 2.489: 98.2480% ( 12) 00:10:14.017 2.489 - 2.501: 98.2777% ( 4) 00:10:14.017 2.501 - 2.513: 98.3519% ( 10) 00:10:14.017 2.513 - 2.524: 98.3890% ( 5) 00:10:14.017 2.524 - 2.536: 98.4261% ( 5) 00:10:14.017 2.536 - 2.548: 98.4633% ( 5) 00:10:14.017 2.548 - 2.560: 98.4707% ( 1) 00:10:14.017 2.572 - 2.584: 98.4781% ( 1) 00:10:14.017 2.584 - 2.596: 98.4855% ( 1) 00:10:14.017 2.643 - 2.655: 98.4929% ( 1) 00:10:14.017 2.655 - 2.667: 98.5004% ( 1) 00:10:14.017 2.702 - 2.714: 98.5078% ( 1) 00:10:14.017 2.726 - 2.738: 98.5152% ( 1) 00:10:14.017 2.785 - 2.797: 98.5226% ( 1) 00:10:14.017 3.342 - 3.366: 98.5375% ( 2) 00:10:14.017 3.413 - 3.437: 98.5672% ( 4) 00:10:14.017 3.437 - 3.461: 98.5746% ( 1) 00:10:14.017 3.461 - 3.484: 98.5969% ( 3) 00:10:14.017 3.484 - 3.508: 98.6117% ( 2) 00:10:14.017 3.508 - 3.532: 98.6192% ( 1) 00:10:14.017 3.627 - 3.650: 98.6340% ( 2) 00:10:14.017 3.650 - 3.674: 98.6563% ( 3) 00:10:14.017 3.674 - 3.698: 98.6637% ( 1) 00:10:14.017 3.721 - 3.745: 98.6934% ( 4) 00:10:14.017 3.745 - 3.769: 98.7008% ( 1) 00:10:14.017 4.006 - 4.030: 98.7082% ( 1) 00:10:14.017 4.030 - 4.053: 98.7157% ( 1) 00:10:14.017 4.101 - 4.124: 98.7231% ( 1) 00:10:14.017 5.381 - 5.404: 98.7305% ( 1) 00:10:14.017 5.665 - 5.689: 98.7379% ( 1) 00:10:14.017 6.116 - 6.163: 98.7454% ( 1) 00:10:14.017 6.210 - 6.258: 98.7528% ( 1) 00:10:14.017 6.353 - 6.400: 98.7602% ( 1) 00:10:14.017 6.495 - 6.542: 98.7676% ( 1) 00:10:14.017 6.779 - 6.827: 98.7751% ( 1) 00:10:14.017 6.827 - 6.874: 98.7825% ( 1) 00:10:14.017 6.921 - 6.969: 98.7899% ( 1) 00:10:14.017 7.016 - 7.064: 98.7973% ( 1) 00:10:14.017 7.111 - 7.159: 98.8048% ( 1) 00:10:14.017 7.538 - 7.585: 9[2024-04-18 13:39:16.452942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:14.017 8.8122% ( 1) 00:10:14.017 7.822 - 7.870: 98.8196% ( 1) 00:10:14.017 8.201 - 8.249: 98.8270% ( 1) 00:10:14.017 8.581 - 8.628: 98.8344% ( 1) 00:10:14.017 8.676 - 8.723: 98.8419% ( 1) 00:10:14.017 15.455 - 15.550: 98.8493% ( 1) 00:10:14.017 15.550 - 15.644: 98.8567% ( 1) 00:10:14.017 15.644 - 15.739: 98.8641% ( 1) 00:10:14.017 15.739 - 15.834: 98.8716% ( 1) 00:10:14.017 15.834 - 15.929: 98.8938% ( 3) 00:10:14.017 15.929 - 16.024: 98.9013% ( 1) 00:10:14.017 16.024 - 16.119: 98.9458% ( 6) 00:10:14.017 16.119 - 16.213: 98.9755% ( 4) 00:10:14.017 16.213 - 16.308: 99.0200% ( 6) 00:10:14.017 16.308 - 16.403: 99.0572% ( 5) 00:10:14.017 16.403 - 16.498: 99.0869% ( 4) 00:10:14.017 16.498 - 16.593: 99.1314% ( 6) 00:10:14.017 16.593 - 16.687: 99.1834% ( 7) 00:10:14.017 16.687 - 16.782: 99.1908% ( 1) 00:10:14.017 16.782 - 16.877: 99.2502% ( 8) 00:10:14.017 16.877 - 16.972: 99.2725% ( 3) 00:10:14.017 16.972 - 17.067: 99.2873% ( 2) 00:10:14.017 17.067 - 17.161: 99.2947% ( 1) 00:10:14.017 17.161 - 17.256: 99.3170% ( 3) 00:10:14.017 17.256 - 17.351: 99.3393% ( 3) 00:10:14.017 17.351 - 17.446: 99.3541% ( 2) 00:10:14.017 17.541 - 17.636: 99.3764% ( 3) 00:10:14.017 17.636 - 17.730: 99.3912% ( 2) 00:10:14.017 18.015 - 18.110: 99.4061% ( 2) 00:10:14.017 18.299 - 18.394: 99.4135% ( 1) 00:10:14.017 18.584 - 18.679: 99.4209% ( 1) 00:10:14.017 29.772 - 29.961: 99.4284% ( 1) 00:10:14.017 3155.437 - 3179.710: 99.4358% ( 1) 00:10:14.017 3980.705 - 4004.978: 99.8367% ( 54) 00:10:14.017 4004.978 - 4029.250: 100.0000% ( 22) 00:10:14.017 00:10:14.017 13:39:16 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:14.017 13:39:16 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:14.017 13:39:16 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:14.017 13:39:16 -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:14.017 13:39:16 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.017 [ 00:10:14.017 { 00:10:14.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.017 "subtype": "Discovery", 00:10:14.017 "listen_addresses": [], 00:10:14.017 "allow_any_host": true, 00:10:14.017 "hosts": [] 00:10:14.017 }, 00:10:14.017 { 00:10:14.017 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.018 "subtype": "NVMe", 00:10:14.018 "listen_addresses": [ 00:10:14.018 { 00:10:14.018 "transport": "VFIOUSER", 00:10:14.018 "trtype": "VFIOUSER", 00:10:14.018 "adrfam": "IPv4", 00:10:14.018 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.018 "trsvcid": "0" 00:10:14.018 } 00:10:14.018 ], 00:10:14.018 "allow_any_host": true, 00:10:14.018 "hosts": [], 00:10:14.018 "serial_number": "SPDK1", 00:10:14.018 "model_number": "SPDK bdev Controller", 00:10:14.018 "max_namespaces": 32, 00:10:14.018 "min_cntlid": 1, 00:10:14.018 "max_cntlid": 65519, 00:10:14.018 "namespaces": [ 00:10:14.018 { 00:10:14.018 "nsid": 1, 00:10:14.018 "bdev_name": "Malloc1", 00:10:14.018 "name": "Malloc1", 00:10:14.018 "nguid": "CDA5039D2F114F738B33D2D7ED1681E7", 00:10:14.018 "uuid": "cda5039d-2f11-4f73-8b33-d2d7ed1681e7" 00:10:14.018 }, 00:10:14.018 { 00:10:14.018 "nsid": 2, 00:10:14.018 "bdev_name": "Malloc3", 00:10:14.018 "name": "Malloc3", 00:10:14.018 "nguid": "67B0AF5BA63B4E45BA71986592A7639F", 00:10:14.018 "uuid": "67b0af5b-a63b-4e45-ba71-986592a7639f" 00:10:14.018 } 00:10:14.018 ] 00:10:14.018 }, 00:10:14.018 { 00:10:14.018 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.018 "subtype": "NVMe", 00:10:14.018 "listen_addresses": [ 00:10:14.018 { 00:10:14.018 "transport": "VFIOUSER", 00:10:14.018 "trtype": "VFIOUSER", 00:10:14.018 "adrfam": "IPv4", 00:10:14.018 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.018 "trsvcid": "0" 00:10:14.018 } 00:10:14.018 ], 
00:10:14.018 "allow_any_host": true, 00:10:14.018 "hosts": [], 00:10:14.018 "serial_number": "SPDK2", 00:10:14.018 "model_number": "SPDK bdev Controller", 00:10:14.018 "max_namespaces": 32, 00:10:14.018 "min_cntlid": 1, 00:10:14.018 "max_cntlid": 65519, 00:10:14.018 "namespaces": [ 00:10:14.018 { 00:10:14.018 "nsid": 1, 00:10:14.018 "bdev_name": "Malloc2", 00:10:14.018 "name": "Malloc2", 00:10:14.018 "nguid": "D26DC24DE9BE46AB9CF92261AA2B6E81", 00:10:14.018 "uuid": "d26dc24d-e9be-46ab-9cf9-2261aa2b6e81" 00:10:14.018 } 00:10:14.018 ] 00:10:14.018 } 00:10:14.018 ] 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2556817 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:14.018 13:39:16 -- common/autotest_common.sh@1251 -- # local i=0 00:10:14.018 13:39:16 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.018 13:39:16 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:14.018 13:39:16 -- common/autotest_common.sh@1262 -- # return 0 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.018 13:39:16 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:14.276 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.276 [2024-04-18 13:39:16.961644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.276 Malloc4 00:10:14.533 13:39:17 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:14.533 [2024-04-18 13:39:17.307220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:14.533 13:39:17 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.790 Asynchronous Event Request test 00:10:14.790 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.790 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.790 Registering asynchronous event callbacks... 00:10:14.790 Starting namespace attribute notice tests for all controllers... 00:10:14.790 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:14.790 aer_cb - Changed Namespace 00:10:14.790 Cleaning up... 
00:10:14.790 [ 00:10:14.790 { 00:10:14.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.790 "subtype": "Discovery", 00:10:14.791 "listen_addresses": [], 00:10:14.791 "allow_any_host": true, 00:10:14.791 "hosts": [] 00:10:14.791 }, 00:10:14.791 { 00:10:14.791 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.791 "subtype": "NVMe", 00:10:14.791 "listen_addresses": [ 00:10:14.791 { 00:10:14.791 "transport": "VFIOUSER", 00:10:14.791 "trtype": "VFIOUSER", 00:10:14.791 "adrfam": "IPv4", 00:10:14.791 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.791 "trsvcid": "0" 00:10:14.791 } 00:10:14.791 ], 00:10:14.791 "allow_any_host": true, 00:10:14.791 "hosts": [], 00:10:14.791 "serial_number": "SPDK1", 00:10:14.791 "model_number": "SPDK bdev Controller", 00:10:14.791 "max_namespaces": 32, 00:10:14.791 "min_cntlid": 1, 00:10:14.791 "max_cntlid": 65519, 00:10:14.791 "namespaces": [ 00:10:14.791 { 00:10:14.791 "nsid": 1, 00:10:14.791 "bdev_name": "Malloc1", 00:10:14.791 "name": "Malloc1", 00:10:14.791 "nguid": "CDA5039D2F114F738B33D2D7ED1681E7", 00:10:14.791 "uuid": "cda5039d-2f11-4f73-8b33-d2d7ed1681e7" 00:10:14.791 }, 00:10:14.791 { 00:10:14.791 "nsid": 2, 00:10:14.791 "bdev_name": "Malloc3", 00:10:14.791 "name": "Malloc3", 00:10:14.791 "nguid": "67B0AF5BA63B4E45BA71986592A7639F", 00:10:14.791 "uuid": "67b0af5b-a63b-4e45-ba71-986592a7639f" 00:10:14.791 } 00:10:14.791 ] 00:10:14.791 }, 00:10:14.791 { 00:10:14.791 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.791 "subtype": "NVMe", 00:10:14.791 "listen_addresses": [ 00:10:14.791 { 00:10:14.791 "transport": "VFIOUSER", 00:10:14.791 "trtype": "VFIOUSER", 00:10:14.791 "adrfam": "IPv4", 00:10:14.791 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.791 "trsvcid": "0" 00:10:14.791 } 00:10:14.791 ], 00:10:14.791 "allow_any_host": true, 00:10:14.791 "hosts": [], 00:10:14.791 "serial_number": "SPDK2", 00:10:14.791 "model_number": "SPDK bdev Controller", 00:10:14.791 "max_namespaces": 32, 00:10:14.791 
"min_cntlid": 1, 00:10:14.791 "max_cntlid": 65519, 00:10:14.791 "namespaces": [ 00:10:14.791 { 00:10:14.791 "nsid": 1, 00:10:14.791 "bdev_name": "Malloc2", 00:10:14.791 "name": "Malloc2", 00:10:14.791 "nguid": "D26DC24DE9BE46AB9CF92261AA2B6E81", 00:10:14.791 "uuid": "d26dc24d-e9be-46ab-9cf9-2261aa2b6e81" 00:10:14.791 }, 00:10:14.791 { 00:10:14.791 "nsid": 2, 00:10:14.791 "bdev_name": "Malloc4", 00:10:14.791 "name": "Malloc4", 00:10:14.791 "nguid": "1EF3691FB8B24923A79316EA550071AB", 00:10:14.791 "uuid": "1ef3691f-b8b2-4923-a793-16ea550071ab" 00:10:14.791 } 00:10:14.791 ] 00:10:14.791 } 00:10:14.791 ] 00:10:14.791 13:39:17 -- target/nvmf_vfio_user.sh@44 -- # wait 2556817 00:10:14.791 13:39:17 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:14.791 13:39:17 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2550600 00:10:14.791 13:39:17 -- common/autotest_common.sh@936 -- # '[' -z 2550600 ']' 00:10:14.791 13:39:17 -- common/autotest_common.sh@940 -- # kill -0 2550600 00:10:14.791 13:39:17 -- common/autotest_common.sh@941 -- # uname 00:10:14.791 13:39:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.791 13:39:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2550600 00:10:14.791 13:39:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:14.791 13:39:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:14.791 13:39:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2550600' 00:10:14.791 killing process with pid 2550600 00:10:14.791 13:39:17 -- common/autotest_common.sh@955 -- # kill 2550600 00:10:14.791 [2024-04-18 13:39:17.592705] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:14.791 13:39:17 -- common/autotest_common.sh@960 -- # wait 2550600 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2556967 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2556967' 00:10:15.356 Process pid: 2556967 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:15.356 13:39:17 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2556967 00:10:15.356 13:39:17 -- common/autotest_common.sh@817 -- # '[' -z 2556967 ']' 00:10:15.356 13:39:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.356 13:39:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:15.356 13:39:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.356 13:39:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:15.356 13:39:17 -- common/autotest_common.sh@10 -- # set +x 00:10:15.356 [2024-04-18 13:39:18.019718] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:15.356 [2024-04-18 13:39:18.020752] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:10:15.356 [2024-04-18 13:39:18.020818] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.356 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.356 [2024-04-18 13:39:18.086036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.615 [2024-04-18 13:39:18.202461] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.615 [2024-04-18 13:39:18.202524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.615 [2024-04-18 13:39:18.202549] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.615 [2024-04-18 13:39:18.202562] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.615 [2024-04-18 13:39:18.202575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.615 [2024-04-18 13:39:18.202656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.615 [2024-04-18 13:39:18.202713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.615 [2024-04-18 13:39:18.202828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.615 [2024-04-18 13:39:18.202832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.615 [2024-04-18 13:39:18.315800] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:15.616 [2024-04-18 13:39:18.316056] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 
00:10:15.616 [2024-04-18 13:39:18.316343] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:15.616 [2024-04-18 13:39:18.317165] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:15.616 [2024-04-18 13:39:18.317290] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:10:16.181 13:39:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:16.181 13:39:18 -- common/autotest_common.sh@850 -- # return 0 00:10:16.181 13:39:18 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:17.559 13:39:19 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:17.559 13:39:20 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:17.559 13:39:20 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:17.559 13:39:20 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:17.559 13:39:20 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:17.559 13:39:20 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:17.817 Malloc1 00:10:17.817 13:39:20 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:18.074 13:39:20 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:18.332 13:39:20 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 
00:10:18.590 13:39:21 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:18.590 13:39:21 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:18.590 13:39:21 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:18.849 Malloc2 00:10:18.849 13:39:21 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:19.107 13:39:21 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:19.364 13:39:21 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:19.623 13:39:22 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:19.623 13:39:22 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2556967 00:10:19.623 13:39:22 -- common/autotest_common.sh@936 -- # '[' -z 2556967 ']' 00:10:19.623 13:39:22 -- common/autotest_common.sh@940 -- # kill -0 2556967 00:10:19.623 13:39:22 -- common/autotest_common.sh@941 -- # uname 00:10:19.624 13:39:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:19.624 13:39:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2556967 00:10:19.624 13:39:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:19.624 13:39:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:19.624 13:39:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2556967' 00:10:19.624 killing process with pid 2556967 00:10:19.624 13:39:22 -- common/autotest_common.sh@955 -- # kill 2556967 00:10:19.624 13:39:22 -- common/autotest_common.sh@960 -- # wait 2556967 
00:10:19.882 13:39:22 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:19.882 13:39:22 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:19.882 00:10:19.882 real 0m53.279s 00:10:19.882 user 3m30.101s 00:10:19.882 sys 0m4.603s 00:10:19.882 13:39:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.882 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:10:19.882 ************************************ 00:10:19.882 END TEST nvmf_vfio_user 00:10:19.882 ************************************ 00:10:19.882 13:39:22 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:19.882 13:39:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:19.882 13:39:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.882 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:10:20.140 ************************************ 00:10:20.140 START TEST nvmf_vfio_user_nvme_compliance 00:10:20.140 ************************************ 00:10:20.140 13:39:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:20.140 * Looking for test storage... 
00:10:20.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:20.140 13:39:22 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.140 13:39:22 -- nvmf/common.sh@7 -- # uname -s 00:10:20.140 13:39:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.140 13:39:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.140 13:39:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.140 13:39:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.140 13:39:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.140 13:39:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.140 13:39:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.140 13:39:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.140 13:39:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.140 13:39:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.141 13:39:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:20.141 13:39:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:20.141 13:39:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.141 13:39:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.141 13:39:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.141 13:39:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.141 13:39:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.141 13:39:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.141 13:39:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.141 13:39:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.141 13:39:22 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.141 13:39:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.141 13:39:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.141 13:39:22 -- paths/export.sh@5 -- # export PATH 00:10:20.141 13:39:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.141 13:39:22 -- nvmf/common.sh@47 -- # : 0 00:10:20.141 13:39:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.141 13:39:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.141 13:39:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.141 13:39:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.141 13:39:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.141 13:39:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.141 13:39:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.141 13:39:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.141 13:39:22 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.141 13:39:22 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.141 13:39:22 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:20.141 13:39:22 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:20.141 13:39:22 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:20.141 13:39:22 -- compliance/compliance.sh@20 -- # nvmfpid=2557582 00:10:20.141 13:39:22 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:20.141 13:39:22 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2557582' 00:10:20.141 Process pid: 2557582 00:10:20.141 13:39:22 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:10:20.141 13:39:22 -- compliance/compliance.sh@24 -- # waitforlisten 2557582 00:10:20.141 13:39:22 -- common/autotest_common.sh@817 -- # '[' -z 2557582 ']' 00:10:20.141 13:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.141 13:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:20.141 13:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.141 13:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:20.141 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:10:20.141 [2024-04-18 13:39:22.811057] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:10:20.141 [2024-04-18 13:39:22.811145] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.141 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.141 [2024-04-18 13:39:22.868763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.401 [2024-04-18 13:39:22.977631] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.401 [2024-04-18 13:39:22.977695] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.401 [2024-04-18 13:39:22.977723] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.401 [2024-04-18 13:39:22.977737] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.401 [2024-04-18 13:39:22.977749] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:20.401 [2024-04-18 13:39:22.977831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.401 [2024-04-18 13:39:22.977886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.401 [2024-04-18 13:39:22.977890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.401 13:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:20.401 13:39:23 -- common/autotest_common.sh@850 -- # return 0 00:10:20.401 13:39:23 -- compliance/compliance.sh@26 -- # sleep 1 00:10:21.335 13:39:24 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:21.335 13:39:24 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:21.335 13:39:24 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:21.335 13:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.335 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.335 13:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.335 13:39:24 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:21.335 13:39:24 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:21.335 13:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.335 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.335 malloc0 00:10:21.335 13:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.335 13:39:24 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:21.335 13:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.335 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.594 13:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.594 13:39:24 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:21.594 13:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.594 
13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.594 13:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.594 13:39:24 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:21.594 13:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.594 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.594 13:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.594 13:39:24 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:21.594 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.594 00:10:21.594 00:10:21.594 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.594 http://cunit.sourceforge.net/ 00:10:21.594 00:10:21.594 00:10:21.594 Suite: nvme_compliance 00:10:21.594 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-18 13:39:24.308735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.594 [2024-04-18 13:39:24.310211] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:21.594 [2024-04-18 13:39:24.310236] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:21.594 [2024-04-18 13:39:24.310248] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:21.594 [2024-04-18 13:39:24.311752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.594 passed 00:10:21.594 Test: admin_identify_ctrlr_verify_fused ...[2024-04-18 13:39:24.396344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.594 [2024-04-18 13:39:24.399366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.852 passed 
00:10:21.852 Test: admin_identify_ns ...[2024-04-18 13:39:24.484660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.852 [2024-04-18 13:39:24.544214] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:21.852 [2024-04-18 13:39:24.552194] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:21.852 [2024-04-18 13:39:24.573341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.852 passed 00:10:21.852 Test: admin_get_features_mandatory_features ...[2024-04-18 13:39:24.659543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.110 [2024-04-18 13:39:24.662567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.110 passed 00:10:22.110 Test: admin_get_features_optional_features ...[2024-04-18 13:39:24.745068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.110 [2024-04-18 13:39:24.748090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.110 passed 00:10:22.110 Test: admin_set_features_number_of_queues ...[2024-04-18 13:39:24.832241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.370 [2024-04-18 13:39:24.938281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.370 passed 00:10:22.370 Test: admin_get_log_page_mandatory_logs ...[2024-04-18 13:39:25.022895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.370 [2024-04-18 13:39:25.025913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.370 passed 00:10:22.371 Test: admin_get_log_page_with_lpo ...[2024-04-18 13:39:25.110123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.629 [2024-04-18 
13:39:25.183205] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:22.629 [2024-04-18 13:39:25.196275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.629 passed 00:10:22.629 Test: fabric_property_get ...[2024-04-18 13:39:25.279438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.629 [2024-04-18 13:39:25.280721] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:22.629 [2024-04-18 13:39:25.282475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.629 passed 00:10:22.629 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-18 13:39:25.366020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.629 [2024-04-18 13:39:25.367334] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:22.629 [2024-04-18 13:39:25.369037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.629 passed 00:10:22.887 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-18 13:39:25.454040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.887 [2024-04-18 13:39:25.539190] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:22.887 [2024-04-18 13:39:25.555191] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:22.887 [2024-04-18 13:39:25.560280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.887 passed 00:10:22.887 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-18 13:39:25.643914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.887 [2024-04-18 13:39:25.645224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:10:22.887 [2024-04-18 13:39:25.646940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.887 passed 00:10:23.146 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-18 13:39:25.726713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.146 [2024-04-18 13:39:25.806188] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:23.146 [2024-04-18 13:39:25.830199] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:23.146 [2024-04-18 13:39:25.835311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.146 passed 00:10:23.146 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-18 13:39:25.915890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.146 [2024-04-18 13:39:25.917206] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:23.146 [2024-04-18 13:39:25.917246] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:23.146 [2024-04-18 13:39:25.920920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.406 passed 00:10:23.406 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-18 13:39:26.004721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.406 [2024-04-18 13:39:26.096188] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:23.406 [2024-04-18 13:39:26.104188] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:23.406 [2024-04-18 13:39:26.112187] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:23.406 [2024-04-18 13:39:26.120186] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:23.406 
[2024-04-18 13:39:26.149277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.406 passed 00:10:23.666 Test: admin_create_io_sq_verify_pc ...[2024-04-18 13:39:26.230893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.666 [2024-04-18 13:39:26.251199] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:23.666 [2024-04-18 13:39:26.268515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.666 passed 00:10:23.666 Test: admin_create_io_qp_max_qps ...[2024-04-18 13:39:26.351031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.044 [2024-04-18 13:39:27.448207] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:25.044 [2024-04-18 13:39:27.831860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.304 passed 00:10:25.304 Test: admin_create_io_sq_shared_cq ...[2024-04-18 13:39:27.916724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.304 [2024-04-18 13:39:28.048184] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:25.304 [2024-04-18 13:39:28.085275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.565 passed 00:10:25.565 00:10:25.565 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.565 suites 1 1 n/a 0 0 00:10:25.565 tests 18 18 18 0 0 00:10:25.565 asserts 360 360 360 0 n/a 00:10:25.565 00:10:25.565 Elapsed time = 1.564 seconds 00:10:25.565 13:39:28 -- compliance/compliance.sh@42 -- # killprocess 2557582 00:10:25.565 13:39:28 -- common/autotest_common.sh@936 -- # '[' -z 2557582 ']' 00:10:25.565 13:39:28 -- common/autotest_common.sh@940 -- # kill -0 2557582 00:10:25.565 13:39:28 -- common/autotest_common.sh@941 -- # uname 
00:10:25.565 13:39:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.565 13:39:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2557582 00:10:25.565 13:39:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:25.565 13:39:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:25.565 13:39:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2557582' 00:10:25.565 killing process with pid 2557582 00:10:25.565 13:39:28 -- common/autotest_common.sh@955 -- # kill 2557582 00:10:25.565 13:39:28 -- common/autotest_common.sh@960 -- # wait 2557582 00:10:25.823 13:39:28 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:25.823 13:39:28 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:25.823 00:10:25.823 real 0m5.760s 00:10:25.823 user 0m16.077s 00:10:25.823 sys 0m0.554s 00:10:25.823 13:39:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:25.823 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:10:25.823 ************************************ 00:10:25.823 END TEST nvmf_vfio_user_nvme_compliance 00:10:25.823 ************************************ 00:10:25.823 13:39:28 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:25.823 13:39:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:25.823 13:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.823 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:10:25.823 ************************************ 00:10:25.823 START TEST nvmf_vfio_user_fuzz 00:10:25.823 ************************************ 00:10:25.823 13:39:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:26.083 * Looking for test storage... 
00:10:26.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.083 13:39:28 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.083 13:39:28 -- nvmf/common.sh@7 -- # uname -s 00:10:26.083 13:39:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.083 13:39:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.083 13:39:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.083 13:39:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.083 13:39:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.083 13:39:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.083 13:39:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.083 13:39:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.083 13:39:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.083 13:39:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.083 13:39:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:26.083 13:39:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:26.083 13:39:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.083 13:39:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.083 13:39:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.083 13:39:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.083 13:39:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.083 13:39:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.083 13:39:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.083 13:39:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.083 13:39:28 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.083 13:39:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.084 13:39:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.084 13:39:28 -- paths/export.sh@5 -- # export PATH 00:10:26.084 13:39:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.084 13:39:28 -- nvmf/common.sh@47 -- # : 0 00:10:26.084 13:39:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.084 13:39:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.084 13:39:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.084 13:39:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.084 13:39:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.084 13:39:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.084 13:39:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.084 13:39:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2558313 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:26.084 13:39:28 -- 
target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2558313' 00:10:26.084 Process pid: 2558313 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:26.084 13:39:28 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2558313 00:10:26.084 13:39:28 -- common/autotest_common.sh@817 -- # '[' -z 2558313 ']' 00:10:26.084 13:39:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.084 13:39:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:26.084 13:39:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.084 13:39:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:26.084 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:10:26.343 13:39:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:26.343 13:39:29 -- common/autotest_common.sh@850 -- # return 0 00:10:26.343 13:39:29 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:27.302 13:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.302 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:10:27.302 13:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:27.302 13:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.302 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:10:27.302 malloc0 00:10:27.302 13:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 
-a -s spdk 00:10:27.302 13:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.302 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:10:27.302 13:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:27.302 13:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.302 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:10:27.302 13:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:27.302 13:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.302 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:10:27.302 13:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:27.302 13:39:30 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:59.424 Fuzzing completed. 
Shutting down the fuzz application 00:10:59.424 00:10:59.424 Dumping successful admin opcodes: 00:10:59.424 8, 9, 10, 24, 00:10:59.424 Dumping successful io opcodes: 00:10:59.424 0, 00:10:59.424 NS: 0x200003a1ef00 I/O qp, Total commands completed: 574611, total successful commands: 2215, random_seed: 3547379968 00:10:59.424 NS: 0x200003a1ef00 admin qp, Total commands completed: 125532, total successful commands: 1027, random_seed: 3779046016 00:10:59.424 13:40:00 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:59.424 13:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.424 13:40:00 -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 13:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.424 13:40:00 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2558313 00:10:59.424 13:40:00 -- common/autotest_common.sh@936 -- # '[' -z 2558313 ']' 00:10:59.424 13:40:00 -- common/autotest_common.sh@940 -- # kill -0 2558313 00:10:59.424 13:40:00 -- common/autotest_common.sh@941 -- # uname 00:10:59.424 13:40:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.424 13:40:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2558313 00:10:59.424 13:40:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.424 13:40:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.424 13:40:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2558313' 00:10:59.424 killing process with pid 2558313 00:10:59.424 13:40:00 -- common/autotest_common.sh@955 -- # kill 2558313 00:10:59.424 13:40:00 -- common/autotest_common.sh@960 -- # wait 2558313 00:10:59.424 13:40:00 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:59.424 13:40:00 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:59.424 00:10:59.424 real 0m32.398s 00:10:59.424 user 0m31.309s 00:10:59.424 sys 0m28.654s 00:10:59.424 13:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.424 13:40:00 -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 ************************************ 00:10:59.424 END TEST nvmf_vfio_user_fuzz 00:10:59.424 ************************************ 00:10:59.424 13:40:01 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.424 13:40:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.424 13:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.424 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 ************************************ 00:10:59.424 START TEST nvmf_host_management 00:10:59.424 ************************************ 00:10:59.424 13:40:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.424 * Looking for test storage... 
00:10:59.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.424 13:40:01 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.424 13:40:01 -- nvmf/common.sh@7 -- # uname -s 00:10:59.424 13:40:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.424 13:40:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.424 13:40:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.424 13:40:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.424 13:40:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.424 13:40:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.424 13:40:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.424 13:40:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.424 13:40:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.424 13:40:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.424 13:40:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.424 13:40:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.424 13:40:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.424 13:40:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.424 13:40:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.424 13:40:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.424 13:40:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.424 13:40:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.424 13:40:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.424 13:40:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.424 13:40:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.424 13:40:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.424 13:40:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.424 13:40:01 -- paths/export.sh@5 -- # export PATH 00:10:59.424 13:40:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.424 13:40:01 -- nvmf/common.sh@47 -- # : 0 00:10:59.424 13:40:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.424 13:40:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.424 13:40:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.424 13:40:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.424 13:40:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.424 13:40:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.425 13:40:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.425 13:40:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.425 13:40:01 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.425 13:40:01 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.425 13:40:01 -- target/host_management.sh@105 -- # nvmftestinit 00:10:59.425 13:40:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:59.425 13:40:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.425 13:40:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:59.425 13:40:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:59.425 13:40:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:59.425 13:40:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.425 13:40:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.425 13:40:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:59.425 13:40:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:59.425 13:40:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:59.425 13:40:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.425 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:11:00.361 13:40:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:00.361 13:40:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.361 13:40:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.361 13:40:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.361 13:40:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.361 13:40:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.361 13:40:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.361 13:40:03 -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.361 13:40:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.361 13:40:03 -- nvmf/common.sh@296 -- # e810=() 00:11:00.361 13:40:03 -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.361 13:40:03 -- nvmf/common.sh@297 -- # x722=() 00:11:00.361 13:40:03 -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.361 13:40:03 -- nvmf/common.sh@298 -- # mlx=() 00:11:00.361 13:40:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.361 13:40:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.361 13:40:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:11:00.362 13:40:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.362 13:40:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.362 13:40:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.362 13:40:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.362 13:40:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:00.362 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:00.362 13:40:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.362 13:40:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:00.362 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:00.362 13:40:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.362 
13:40:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.362 13:40:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.362 13:40:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.362 13:40:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:00.362 Found net devices under 0000:84:00.0: cvl_0_0 00:11:00.362 13:40:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.362 13:40:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.362 13:40:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.362 13:40:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.362 13:40:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:00.362 Found net devices under 0000:84:00.1: cvl_0_1 00:11:00.362 13:40:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.362 13:40:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:00.362 13:40:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:00.362 13:40:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:00.362 13:40:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.362 13:40:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.362 13:40:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.362 13:40:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.362 13:40:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.362 13:40:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.362 13:40:03 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.362 13:40:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.362 13:40:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.362 13:40:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.362 13:40:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.362 13:40:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.362 13:40:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.362 13:40:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.362 13:40:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.362 13:40:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.362 13:40:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.623 13:40:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.623 13:40:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.623 13:40:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:11:00.623 00:11:00.623 --- 10.0.0.2 ping statistics --- 00:11:00.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.623 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:00.623 13:40:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:00.623 00:11:00.623 --- 10.0.0.1 ping statistics --- 00:11:00.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.623 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:00.623 13:40:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.623 13:40:03 -- nvmf/common.sh@411 -- # return 0 00:11:00.623 13:40:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:00.623 13:40:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.623 13:40:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:00.623 13:40:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:00.623 13:40:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.623 13:40:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:00.623 13:40:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:00.623 13:40:03 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:00.623 13:40:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:00.623 13:40:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.623 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:00.623 ************************************ 00:11:00.623 START TEST nvmf_host_management 00:11:00.623 ************************************ 00:11:00.623 13:40:03 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:00.623 13:40:03 -- target/host_management.sh@69 -- # starttarget 00:11:00.623 13:40:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:00.623 13:40:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:00.623 13:40:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:00.623 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:00.623 13:40:03 -- nvmf/common.sh@470 -- # nvmfpid=2563806 00:11:00.623 13:40:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:00.623 13:40:03 -- nvmf/common.sh@471 -- # waitforlisten 2563806 00:11:00.623 13:40:03 -- common/autotest_common.sh@817 -- # '[' -z 2563806 ']' 00:11:00.623 13:40:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.623 13:40:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:00.623 13:40:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.623 13:40:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:00.623 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:00.623 [2024-04-18 13:40:03.365192] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:00.623 [2024-04-18 13:40:03.365292] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.623 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.882 [2024-04-18 13:40:03.431141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.882 [2024-04-18 13:40:03.543300] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.882 [2024-04-18 13:40:03.543355] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.882 [2024-04-18 13:40:03.543370] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.882 [2024-04-18 13:40:03.543382] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:00.882 [2024-04-18 13:40:03.543392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.882 [2024-04-18 13:40:03.543448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.882 [2024-04-18 13:40:03.543510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.882 [2024-04-18 13:40:03.543575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.882 [2024-04-18 13:40:03.543578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.882 13:40:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:00.882 13:40:03 -- common/autotest_common.sh@850 -- # return 0 00:11:00.882 13:40:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:00.882 13:40:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:00.882 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.140 13:40:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.140 13:40:03 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.141 13:40:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.141 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 [2024-04-18 13:40:03.692823] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.141 13:40:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.141 13:40:03 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:01.141 13:40:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:01.141 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 13:40:03 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:01.141 13:40:03 -- target/host_management.sh@23 -- # cat 00:11:01.141 13:40:03 -- target/host_management.sh@30 -- # rpc_cmd 00:11:01.141 
13:40:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.141 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 Malloc0 00:11:01.141 [2024-04-18 13:40:03.752390] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.141 13:40:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.141 13:40:03 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:01.141 13:40:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:01.141 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 13:40:03 -- target/host_management.sh@73 -- # perfpid=2563966 00:11:01.141 13:40:03 -- target/host_management.sh@74 -- # waitforlisten 2563966 /var/tmp/bdevperf.sock 00:11:01.141 13:40:03 -- common/autotest_common.sh@817 -- # '[' -z 2563966 ']' 00:11:01.141 13:40:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:01.141 13:40:03 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:01.141 13:40:03 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:01.141 13:40:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:01.141 13:40:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:01.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:01.141 13:40:03 -- nvmf/common.sh@521 -- # config=() 00:11:01.141 13:40:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:01.141 13:40:03 -- nvmf/common.sh@521 -- # local subsystem config 00:11:01.141 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 13:40:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:01.141 13:40:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:01.141 { 00:11:01.141 "params": { 00:11:01.141 "name": "Nvme$subsystem", 00:11:01.141 "trtype": "$TEST_TRANSPORT", 00:11:01.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.141 "adrfam": "ipv4", 00:11:01.141 "trsvcid": "$NVMF_PORT", 00:11:01.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.141 "hdgst": ${hdgst:-false}, 00:11:01.141 "ddgst": ${ddgst:-false} 00:11:01.141 }, 00:11:01.141 "method": "bdev_nvme_attach_controller" 00:11:01.141 } 00:11:01.141 EOF 00:11:01.141 )") 00:11:01.141 13:40:03 -- nvmf/common.sh@543 -- # cat 00:11:01.141 13:40:03 -- nvmf/common.sh@545 -- # jq . 00:11:01.141 13:40:03 -- nvmf/common.sh@546 -- # IFS=, 00:11:01.141 13:40:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:01.141 "params": { 00:11:01.141 "name": "Nvme0", 00:11:01.141 "trtype": "tcp", 00:11:01.141 "traddr": "10.0.0.2", 00:11:01.141 "adrfam": "ipv4", 00:11:01.141 "trsvcid": "4420", 00:11:01.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:01.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:01.141 "hdgst": false, 00:11:01.141 "ddgst": false 00:11:01.141 }, 00:11:01.141 "method": "bdev_nvme_attach_controller" 00:11:01.141 }' 00:11:01.141 [2024-04-18 13:40:03.828129] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:11:01.141 [2024-04-18 13:40:03.828233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563966 ] 00:11:01.141 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.141 [2024-04-18 13:40:03.889800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.399 [2024-04-18 13:40:03.998949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.656 Running I/O for 10 seconds... 00:11:01.656 13:40:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:01.656 13:40:04 -- common/autotest_common.sh@850 -- # return 0 00:11:01.656 13:40:04 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:01.656 13:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.656 13:40:04 -- common/autotest_common.sh@10 -- # set +x 00:11:01.656 13:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.657 13:40:04 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.657 13:40:04 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:01.657 13:40:04 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:01.657 13:40:04 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:01.657 13:40:04 -- target/host_management.sh@52 -- # local ret=1 00:11:01.657 13:40:04 -- target/host_management.sh@53 -- # local i 00:11:01.657 13:40:04 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:01.657 13:40:04 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:01.657 13:40:04 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:01.657 13:40:04 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:11:01.657 13:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.657 13:40:04 -- common/autotest_common.sh@10 -- # set +x 00:11:01.657 13:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.657 13:40:04 -- target/host_management.sh@55 -- # read_io_count=65 00:11:01.657 13:40:04 -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:11:01.657 13:40:04 -- target/host_management.sh@62 -- # sleep 0.25 00:11:01.915 13:40:04 -- target/host_management.sh@54 -- # (( i-- )) 00:11:01.915 13:40:04 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:01.915 13:40:04 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:01.915 13:40:04 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.915 13:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.915 13:40:04 -- common/autotest_common.sh@10 -- # set +x 00:11:01.915 13:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.915 13:40:04 -- target/host_management.sh@55 -- # read_io_count=449 00:11:01.915 13:40:04 -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:11:01.915 13:40:04 -- target/host_management.sh@59 -- # ret=0 00:11:01.915 13:40:04 -- target/host_management.sh@60 -- # break 00:11:02.175 13:40:04 -- target/host_management.sh@64 -- # return 0 00:11:02.175 13:40:04 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:02.175 13:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.175 13:40:04 -- common/autotest_common.sh@10 -- # set +x 00:11:02.175 [2024-04-18 13:40:04.727591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe380 is same with the state(5) to be set 00:11:02.175 [2024-04-18 13:40:04.727659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe380 is same with the state(5) to be set 00:11:02.175 [2024-04-18 
13:40:04.727674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe380 is same with the state(5) to be set 00:11:02.175 [2024-04-18 13:40:04.727687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe380 is same with the state(5) to be set 00:11:02.175 13:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.175 13:40:04 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:02.175 13:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.175 13:40:04 -- common/autotest_common.sh@10 -- # set +x 00:11:02.175 13:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.175 13:40:04 -- target/host_management.sh@87 -- # sleep 1 00:11:02.175 [2024-04-18 13:40:04.740358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.175 [2024-04-18 13:40:04.740400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.175 [2024-04-18 13:40:04.740419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.175 [2024-04-18 13:40:04.740433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.175 [2024-04-18 13:40:04.740447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.175 [2024-04-18 13:40:04.740460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.175 [2024-04-18 13:40:04.740483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:11:02.175 [2024-04-18 13:40:04.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.175 [2024-04-18 13:40:04.740510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10007b0 is same with the state(5) to be set 00:11:02.176 [2024-04-18 13:40:04.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 
13:40:04.740738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.740973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.740987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:02.176 [2024-04-18 13:40:04.741278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.176 [2024-04-18 13:40:04.741323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.176 [2024-04-18 13:40:04.741341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 
[2024-04-18 13:40:04.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.741983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.741997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.177 [2024-04-18 13:40:04.742383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.177 [2024-04-18 13:40:04.742396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.178 [2024-04-18 13:40:04.742425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.178 [2024-04-18 13:40:04.742454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.178 [2024-04-18 13:40:04.742509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.178 [2024-04-18 13:40:04.742538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:02.178 [2024-04-18 13:40:04.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.178 [2024-04-18 13:40:04.742655] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14329d0 was disconnected and freed. reset controller. 
00:11:02.178 [2024-04-18 13:40:04.743772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:02.178 task offset: 65536 on job bdev=Nvme0n1 fails 00:11:02.178 00:11:02.178 Latency(us) 00:11:02.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.178 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:02.178 Job: Nvme0n1 ended in about 0.40 seconds with error 00:11:02.178 Verification LBA range: start 0x0 length 0x400 00:11:02.178 Nvme0n1 : 0.40 1270.54 79.41 158.82 0.00 43552.14 2912.71 38447.79 00:11:02.178 =================================================================================================================== 00:11:02.178 Total : 1270.54 79.41 158.82 0.00 43552.14 2912.71 38447.79 00:11:02.178 [2024-04-18 13:40:04.745668] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:02.178 [2024-04-18 13:40:04.745700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10007b0 (9): Bad file descriptor 00:11:02.178 [2024-04-18 13:40:04.848372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:03.112 13:40:05 -- target/host_management.sh@91 -- # kill -9 2563966 00:11:03.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2563966) - No such process 00:11:03.112 13:40:05 -- target/host_management.sh@91 -- # true 00:11:03.112 13:40:05 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:03.112 13:40:05 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:03.112 13:40:05 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:03.112 13:40:05 -- nvmf/common.sh@521 -- # config=() 00:11:03.112 13:40:05 -- nvmf/common.sh@521 -- # local subsystem config 00:11:03.112 13:40:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:03.112 13:40:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:03.112 { 00:11:03.112 "params": { 00:11:03.112 "name": "Nvme$subsystem", 00:11:03.112 "trtype": "$TEST_TRANSPORT", 00:11:03.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:03.112 "adrfam": "ipv4", 00:11:03.112 "trsvcid": "$NVMF_PORT", 00:11:03.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:03.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:03.112 "hdgst": ${hdgst:-false}, 00:11:03.112 "ddgst": ${ddgst:-false} 00:11:03.112 }, 00:11:03.112 "method": "bdev_nvme_attach_controller" 00:11:03.112 } 00:11:03.112 EOF 00:11:03.112 )") 00:11:03.112 13:40:05 -- nvmf/common.sh@543 -- # cat 00:11:03.112 13:40:05 -- nvmf/common.sh@545 -- # jq . 
00:11:03.112 13:40:05 -- nvmf/common.sh@546 -- # IFS=, 00:11:03.112 13:40:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:03.112 "params": { 00:11:03.112 "name": "Nvme0", 00:11:03.112 "trtype": "tcp", 00:11:03.112 "traddr": "10.0.0.2", 00:11:03.112 "adrfam": "ipv4", 00:11:03.112 "trsvcid": "4420", 00:11:03.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:03.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:03.112 "hdgst": false, 00:11:03.112 "ddgst": false 00:11:03.112 }, 00:11:03.112 "method": "bdev_nvme_attach_controller" 00:11:03.112 }' 00:11:03.112 [2024-04-18 13:40:05.789505] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:03.112 [2024-04-18 13:40:05.789589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564129 ] 00:11:03.112 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.112 [2024-04-18 13:40:05.852727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.373 [2024-04-18 13:40:05.961834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.373 Running I/O for 1 seconds... 
00:11:04.749 00:11:04.749 Latency(us) 00:11:04.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.749 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:04.749 Verification LBA range: start 0x0 length 0x400 00:11:04.749 Nvme0n1 : 1.01 1395.30 87.21 0.00 0.00 45174.26 9272.13 41943.04 00:11:04.749 =================================================================================================================== 00:11:04.749 Total : 1395.30 87.21 0.00 0.00 45174.26 9272.13 41943.04 00:11:04.749 13:40:07 -- target/host_management.sh@102 -- # stoptarget 00:11:04.749 13:40:07 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:04.749 13:40:07 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:04.749 13:40:07 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:04.749 13:40:07 -- target/host_management.sh@40 -- # nvmftestfini 00:11:04.749 13:40:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:04.749 13:40:07 -- nvmf/common.sh@117 -- # sync 00:11:04.749 13:40:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.749 13:40:07 -- nvmf/common.sh@120 -- # set +e 00:11:04.749 13:40:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.749 13:40:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.749 rmmod nvme_tcp 00:11:04.749 rmmod nvme_fabrics 00:11:04.749 rmmod nvme_keyring 00:11:04.749 13:40:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.749 13:40:07 -- nvmf/common.sh@124 -- # set -e 00:11:04.749 13:40:07 -- nvmf/common.sh@125 -- # return 0 00:11:04.749 13:40:07 -- nvmf/common.sh@478 -- # '[' -n 2563806 ']' 00:11:04.749 13:40:07 -- nvmf/common.sh@479 -- # killprocess 2563806 00:11:04.749 13:40:07 -- common/autotest_common.sh@936 -- # '[' -z 2563806 ']' 00:11:04.749 13:40:07 -- 
common/autotest_common.sh@940 -- # kill -0 2563806 00:11:04.749 13:40:07 -- common/autotest_common.sh@941 -- # uname 00:11:04.749 13:40:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.749 13:40:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2563806 00:11:04.749 13:40:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:04.749 13:40:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:04.749 13:40:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2563806' 00:11:04.749 killing process with pid 2563806 00:11:04.749 13:40:07 -- common/autotest_common.sh@955 -- # kill 2563806 00:11:04.749 13:40:07 -- common/autotest_common.sh@960 -- # wait 2563806 00:11:05.009 [2024-04-18 13:40:07.790043] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:05.268 13:40:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:05.268 13:40:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:05.268 13:40:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:05.268 13:40:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.268 13:40:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.268 13:40:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.268 13:40:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.268 13:40:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.172 13:40:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.172 00:11:07.172 real 0m6.550s 00:11:07.172 user 0m19.184s 00:11:07.172 sys 0m1.308s 00:11:07.172 13:40:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:07.172 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.172 ************************************ 00:11:07.172 END TEST nvmf_host_management 00:11:07.172 ************************************ 00:11:07.172 13:40:09 -- 
target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:07.172 00:11:07.172 real 0m8.780s 00:11:07.172 user 0m19.969s 00:11:07.172 sys 0m2.757s 00:11:07.172 13:40:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:07.172 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.172 ************************************ 00:11:07.172 END TEST nvmf_host_management 00:11:07.172 ************************************ 00:11:07.172 13:40:09 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:07.172 13:40:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:07.172 13:40:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.172 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:11:07.430 ************************************ 00:11:07.430 START TEST nvmf_lvol 00:11:07.430 ************************************ 00:11:07.430 13:40:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:07.430 * Looking for test storage... 
00:11:07.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.430 13:40:10 -- nvmf/common.sh@7 -- # uname -s 00:11:07.430 13:40:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.430 13:40:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.430 13:40:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.430 13:40:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.430 13:40:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.430 13:40:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.430 13:40:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.430 13:40:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.430 13:40:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.430 13:40:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.430 13:40:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:07.430 13:40:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:07.430 13:40:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.430 13:40:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.430 13:40:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.430 13:40:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.430 13:40:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.430 13:40:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.430 13:40:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.430 13:40:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.430 13:40:10 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.430 13:40:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.430 13:40:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.430 13:40:10 -- paths/export.sh@5 -- # export PATH 00:11:07.430 13:40:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.430 13:40:10 -- nvmf/common.sh@47 -- # : 0 00:11:07.430 13:40:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.430 13:40:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.430 13:40:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.430 13:40:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.430 13:40:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.430 13:40:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.430 13:40:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.430 13:40:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.430 13:40:10 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:07.430 13:40:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:07.430 13:40:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.430 13:40:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:07.430 13:40:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:07.430 13:40:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:11:07.430 13:40:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.430 13:40:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.430 13:40:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.430 13:40:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:07.430 13:40:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:07.430 13:40:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.430 13:40:10 -- common/autotest_common.sh@10 -- # set +x 00:11:09.358 13:40:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:09.358 13:40:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.358 13:40:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.358 13:40:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.358 13:40:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.358 13:40:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.358 13:40:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.358 13:40:12 -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.358 13:40:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.358 13:40:12 -- nvmf/common.sh@296 -- # e810=() 00:11:09.358 13:40:12 -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.358 13:40:12 -- nvmf/common.sh@297 -- # x722=() 00:11:09.358 13:40:12 -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.358 13:40:12 -- nvmf/common.sh@298 -- # mlx=() 00:11:09.358 13:40:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.358 13:40:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.358 13:40:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.358 13:40:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.358 13:40:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.358 13:40:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.358 13:40:12 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.358 13:40:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.359 13:40:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.359 13:40:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.359 13:40:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.359 13:40:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.359 13:40:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.359 13:40:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.359 13:40:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.359 13:40:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:09.359 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:09.359 13:40:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.359 13:40:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:09.359 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:09.359 13:40:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.359 13:40:12 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.359 13:40:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.359 13:40:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.359 13:40:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:09.359 Found net devices under 0000:84:00.0: cvl_0_0 00:11:09.359 13:40:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.359 13:40:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.359 13:40:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.359 13:40:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.359 13:40:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:09.359 Found net devices under 0000:84:00.1: cvl_0_1 00:11:09.359 13:40:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.359 13:40:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:09.359 13:40:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:09.359 13:40:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:09.359 13:40:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.359 13:40:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.359 13:40:12 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.359 13:40:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.359 13:40:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.359 13:40:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.359 13:40:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.359 13:40:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.359 13:40:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.359 13:40:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.359 13:40:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.359 13:40:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.359 13:40:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.634 13:40:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.634 13:40:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.634 13:40:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.634 13:40:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.634 13:40:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.634 13:40:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.634 13:40:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:11:09.634 00:11:09.634 --- 10.0.0.2 ping statistics --- 00:11:09.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.634 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:09.634 13:40:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:11:09.634 00:11:09.634 --- 10.0.0.1 ping statistics --- 00:11:09.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.634 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:09.634 13:40:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.634 13:40:12 -- nvmf/common.sh@411 -- # return 0 00:11:09.634 13:40:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:09.634 13:40:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.634 13:40:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:09.634 13:40:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:09.634 13:40:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.634 13:40:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:09.634 13:40:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:09.634 13:40:12 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:09.634 13:40:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:09.634 13:40:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:09.634 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:11:09.634 13:40:12 -- nvmf/common.sh@470 -- # nvmfpid=2566369 00:11:09.634 13:40:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:09.634 13:40:12 -- nvmf/common.sh@471 -- # waitforlisten 2566369 00:11:09.634 13:40:12 -- common/autotest_common.sh@817 -- # '[' -z 2566369 ']' 00:11:09.634 13:40:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.634 13:40:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:09.634 13:40:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.634 13:40:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:09.634 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:11:09.634 [2024-04-18 13:40:12.296751] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:09.634 [2024-04-18 13:40:12.296840] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.634 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.634 [2024-04-18 13:40:12.359955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.892 [2024-04-18 13:40:12.467269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.892 [2024-04-18 13:40:12.467322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.892 [2024-04-18 13:40:12.467345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.892 [2024-04-18 13:40:12.467357] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.892 [2024-04-18 13:40:12.467367] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.892 [2024-04-18 13:40:12.467449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.892 [2024-04-18 13:40:12.467523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.892 [2024-04-18 13:40:12.467541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.892 13:40:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:09.892 13:40:12 -- common/autotest_common.sh@850 -- # return 0 00:11:09.892 13:40:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:09.892 13:40:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:09.892 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:11:09.892 13:40:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.892 13:40:12 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:10.150 [2024-04-18 13:40:12.883793] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.150 13:40:12 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.408 13:40:13 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:10.408 13:40:13 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.666 13:40:13 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:10.666 13:40:13 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:10.924 13:40:13 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:11.491 13:40:13 -- target/nvmf_lvol.sh@29 -- # lvs=a3133853-3f1a-4674-91cc-c6e49b9e4514 00:11:11.491 13:40:13 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a3133853-3f1a-4674-91cc-c6e49b9e4514 lvol 20 00:11:11.491 13:40:14 -- target/nvmf_lvol.sh@32 -- # lvol=9807e05b-71b3-411e-99bf-32d932a924b8 00:11:11.491 13:40:14 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:11.749 13:40:14 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9807e05b-71b3-411e-99bf-32d932a924b8 00:11:12.006 13:40:14 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:12.264 [2024-04-18 13:40:14.998117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.264 13:40:15 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:12.522 13:40:15 -- target/nvmf_lvol.sh@42 -- # perf_pid=2566794 00:11:12.522 13:40:15 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:12.522 13:40:15 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:12.522 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.902 13:40:16 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9807e05b-71b3-411e-99bf-32d932a924b8 MY_SNAPSHOT 00:11:13.902 13:40:16 -- target/nvmf_lvol.sh@47 -- # snapshot=0141150a-43d7-457d-83c1-c78f54c097d6 00:11:13.902 13:40:16 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
9807e05b-71b3-411e-99bf-32d932a924b8 30 00:11:14.159 13:40:16 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0141150a-43d7-457d-83c1-c78f54c097d6 MY_CLONE 00:11:14.724 13:40:17 -- target/nvmf_lvol.sh@49 -- # clone=d9bb6572-4f42-4c8e-b4fb-fa137993e84d 00:11:14.724 13:40:17 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d9bb6572-4f42-4c8e-b4fb-fa137993e84d 00:11:15.291 13:40:17 -- target/nvmf_lvol.sh@53 -- # wait 2566794 00:11:23.412 Initializing NVMe Controllers 00:11:23.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:23.412 Controller IO queue size 128, less than required. 00:11:23.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:23.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:23.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:23.412 Initialization complete. Launching workers. 
00:11:23.412 ======================================================== 00:11:23.412 Latency(us) 00:11:23.412 Device Information : IOPS MiB/s Average min max 00:11:23.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10603.20 41.42 12079.34 541.56 72921.26 00:11:23.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10462.90 40.87 12237.03 2179.27 73596.08 00:11:23.412 ======================================================== 00:11:23.412 Total : 21066.10 82.29 12157.66 541.56 73596.08 00:11:23.412 00:11:23.412 13:40:25 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:23.412 13:40:25 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9807e05b-71b3-411e-99bf-32d932a924b8 00:11:23.670 13:40:26 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3133853-3f1a-4674-91cc-c6e49b9e4514 00:11:23.928 13:40:26 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:23.928 13:40:26 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:23.928 13:40:26 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:23.928 13:40:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:23.928 13:40:26 -- nvmf/common.sh@117 -- # sync 00:11:23.928 13:40:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.928 13:40:26 -- nvmf/common.sh@120 -- # set +e 00:11:23.928 13:40:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.928 13:40:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.928 rmmod nvme_tcp 00:11:23.928 rmmod nvme_fabrics 00:11:23.928 rmmod nvme_keyring 00:11:23.928 13:40:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.928 13:40:26 -- nvmf/common.sh@124 -- # set -e 00:11:23.928 13:40:26 -- nvmf/common.sh@125 -- # return 0 00:11:23.928 13:40:26 -- nvmf/common.sh@478 -- # '[' -n 
2566369 ']' 00:11:23.928 13:40:26 -- nvmf/common.sh@479 -- # killprocess 2566369 00:11:23.928 13:40:26 -- common/autotest_common.sh@936 -- # '[' -z 2566369 ']' 00:11:23.928 13:40:26 -- common/autotest_common.sh@940 -- # kill -0 2566369 00:11:23.928 13:40:26 -- common/autotest_common.sh@941 -- # uname 00:11:23.928 13:40:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.928 13:40:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2566369 00:11:23.928 13:40:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.928 13:40:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.928 13:40:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2566369' 00:11:23.928 killing process with pid 2566369 00:11:23.928 13:40:26 -- common/autotest_common.sh@955 -- # kill 2566369 00:11:23.928 13:40:26 -- common/autotest_common.sh@960 -- # wait 2566369 00:11:24.187 13:40:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:24.187 13:40:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:24.187 13:40:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:24.187 13:40:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.187 13:40:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.187 13:40:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.187 13:40:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.187 13:40:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.717 13:40:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.717 00:11:26.717 real 0m18.948s 00:11:26.717 user 1m4.734s 00:11:26.717 sys 0m5.694s 00:11:26.717 13:40:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.717 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:11:26.717 ************************************ 00:11:26.717 END TEST nvmf_lvol 00:11:26.717 ************************************ 
00:11:26.717 13:40:28 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:26.717 13:40:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:26.717 13:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.717 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:11:26.717 ************************************ 00:11:26.717 START TEST nvmf_lvs_grow 00:11:26.717 ************************************ 00:11:26.717 13:40:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:26.717 * Looking for test storage... 00:11:26.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.717 13:40:29 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.717 13:40:29 -- nvmf/common.sh@7 -- # uname -s 00:11:26.718 13:40:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.718 13:40:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.718 13:40:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.718 13:40:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.718 13:40:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.718 13:40:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.718 13:40:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.718 13:40:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.718 13:40:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.718 13:40:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.718 13:40:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:26.718 13:40:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:26.718 13:40:29 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.718 13:40:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.718 13:40:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.718 13:40:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.718 13:40:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.718 13:40:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.718 13:40:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.718 13:40:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.718 13:40:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.718 13:40:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.718 13:40:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.718 13:40:29 -- paths/export.sh@5 -- # export PATH 00:11:26.718 13:40:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.718 13:40:29 -- nvmf/common.sh@47 -- # : 0 00:11:26.718 13:40:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.718 13:40:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.718 13:40:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.718 13:40:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.718 13:40:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.718 13:40:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.718 13:40:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.718 13:40:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.718 13:40:29 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.718 13:40:29 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:26.718 13:40:29 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:26.718 13:40:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:26.718 13:40:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.718 13:40:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:26.718 13:40:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:26.718 13:40:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:26.718 13:40:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.718 13:40:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.718 13:40:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.718 13:40:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:26.718 13:40:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:26.718 13:40:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.718 13:40:29 -- common/autotest_common.sh@10 -- # set +x 00:11:28.620 13:40:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:28.620 13:40:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.620 13:40:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.620 13:40:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.620 13:40:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.620 13:40:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.620 13:40:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.620 13:40:31 -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.620 13:40:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.620 13:40:31 -- nvmf/common.sh@296 -- # e810=() 00:11:28.620 13:40:31 -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.620 13:40:31 -- nvmf/common.sh@297 -- # x722=() 00:11:28.620 13:40:31 -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.620 13:40:31 -- nvmf/common.sh@298 -- # mlx=() 00:11:28.620 13:40:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.620 13:40:31 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.620 13:40:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.620 13:40:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.620 13:40:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:28.620 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:28.620 13:40:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.620 
13:40:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.620 13:40:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:28.620 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:28.620 13:40:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.620 13:40:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.620 13:40:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.620 13:40:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:28.620 13:40:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.620 13:40:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:28.620 Found net devices under 0000:84:00.0: cvl_0_0 00:11:28.620 13:40:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.620 13:40:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.620 13:40:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:28.620 13:40:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.620 13:40:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:28.620 Found net devices under 0000:84:00.1: cvl_0_1 00:11:28.620 13:40:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:28.620 13:40:31 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:11:28.620 13:40:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:28.620 13:40:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:28.620 13:40:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.620 13:40:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.620 13:40:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.620 13:40:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.620 13:40:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.620 13:40:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.620 13:40:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.620 13:40:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.620 13:40:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.620 13:40:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.620 13:40:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.620 13:40:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.620 13:40:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.620 13:40:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.620 13:40:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.620 13:40:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.620 13:40:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.620 13:40:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.620 13:40:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:11:28.620 00:11:28.620 --- 10.0.0.2 ping statistics --- 00:11:28.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.620 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:28.620 13:40:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:11:28.620 00:11:28.620 --- 10.0.0.1 ping statistics --- 00:11:28.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.620 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:28.620 13:40:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.620 13:40:31 -- nvmf/common.sh@411 -- # return 0 00:11:28.620 13:40:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:28.620 13:40:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.621 13:40:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:28.621 13:40:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:28.621 13:40:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.621 13:40:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:28.621 13:40:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:28.621 13:40:31 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:28.621 13:40:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:28.621 13:40:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:28.621 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:11:28.621 13:40:31 -- nvmf/common.sh@470 -- # nvmfpid=2570076 00:11:28.621 13:40:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:28.621 13:40:31 -- nvmf/common.sh@471 -- # waitforlisten 2570076 00:11:28.621 13:40:31 -- 
common/autotest_common.sh@817 -- # '[' -z 2570076 ']' 00:11:28.621 13:40:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.621 13:40:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:28.621 13:40:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.621 13:40:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:28.621 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:11:28.621 [2024-04-18 13:40:31.298934] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:28.621 [2024-04-18 13:40:31.299030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.621 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.621 [2024-04-18 13:40:31.364800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.881 [2024-04-18 13:40:31.472684] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.881 [2024-04-18 13:40:31.472750] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.881 [2024-04-18 13:40:31.472764] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.881 [2024-04-18 13:40:31.472790] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.881 [2024-04-18 13:40:31.472801] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.881 [2024-04-18 13:40:31.472829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.881 13:40:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.881 13:40:31 -- common/autotest_common.sh@850 -- # return 0 00:11:28.881 13:40:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:28.881 13:40:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:28.881 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:11:28.881 13:40:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.881 13:40:31 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.140 [2024-04-18 13:40:31.845804] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.140 13:40:31 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:29.140 13:40:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.140 13:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.140 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:11:29.398 ************************************ 00:11:29.398 START TEST lvs_grow_clean 00:11:29.398 ************************************ 00:11:29.398 13:40:31 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:29.398 13:40:31 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:29.654 13:40:32 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:29.654 13:40:32 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:29.654 13:40:32 -- target/nvmf_lvs_grow.sh@28 -- # lvs=445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:29.654 13:40:32 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:29.654 13:40:32 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:29.913 13:40:32 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:29.913 13:40:32 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:29.913 13:40:32 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 lvol 150 00:11:30.174 13:40:32 -- target/nvmf_lvs_grow.sh@33 -- # lvol=47296c98-c532-4c35-9a8f-640a367bd431 00:11:30.174 13:40:32 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:30.174 13:40:32 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:30.456 [2024-04-18 13:40:33.172297] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:30.456 [2024-04-18 13:40:33.172394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:30.456 true 00:11:30.456 13:40:33 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:30.456 13:40:33 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:30.716 13:40:33 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:30.716 13:40:33 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:30.974 13:40:33 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47296c98-c532-4c35-9a8f-640a367bd431 00:11:31.231 13:40:33 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:31.491 [2024-04-18 13:40:34.175372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.491 13:40:34 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:31.749 13:40:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2570518 00:11:31.749 13:40:34 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:31.749 13:40:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.749 13:40:34 -- 
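The aio_bdev setup and grow steps recorded in the trace above (a 200M file truncated to 400M, 4096-byte blocks, 4 MiB clusters) are internally consistent. A quick sketch of that arithmetic, using only sizes that appear in the log (`truncate -s 200M`/`-s 400M`, `bdev_aio_create ... 4096`, `--cluster-sz 4194304`):

```python
# Arithmetic check for the lvs_grow trace; all sizes are taken from the log.
MIB = 1024 * 1024
BLOCK_SIZE = 4096        # bdev_aio_create block size
CLUSTER_SIZE = 4 * MIB   # bdev_lvol_create_lvstore --cluster-sz 4194304

def blocks(size_mib: int) -> int:
    """Block count of an AIO file of the given size in MiB."""
    return size_mib * MIB // BLOCK_SIZE

old_blocks = blocks(200)  # before truncate -s 400M
new_blocks = blocks(400)  # after the grow + bdev_aio_rescan
print(old_blocks, new_blocks)  # 51200 102400, matching the rescan notice

# Raw cluster counts for the two sizes. The lvstore reports one fewer
# data cluster than this (49 and 99 in the log) because part of the
# device is reserved for lvstore metadata.
old_clusters = old_blocks * BLOCK_SIZE // CLUSTER_SIZE
new_clusters = new_blocks * BLOCK_SIZE // CLUSTER_SIZE
print(old_clusters, new_clusters)  # 50 100
```

This matches the `bdev_aio_rescan` notice ("old block count 51200, new block count 102400") and the `total_data_clusters` values (49 before the grow, 99 after) checked by the test.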
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2570518 /var/tmp/bdevperf.sock 00:11:31.749 13:40:34 -- common/autotest_common.sh@817 -- # '[' -z 2570518 ']' 00:11:31.749 13:40:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.749 13:40:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:31.749 13:40:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.749 13:40:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:31.749 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:11:31.749 [2024-04-18 13:40:34.479065] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:31.749 [2024-04-18 13:40:34.479134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570518 ] 00:11:31.749 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.749 [2024-04-18 13:40:34.542783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.007 [2024-04-18 13:40:34.658827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.007 13:40:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.007 13:40:34 -- common/autotest_common.sh@850 -- # return 0 00:11:32.007 13:40:34 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:32.267 Nvme0n1 00:11:32.526 13:40:35 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:11:32.785 [ 00:11:32.785 { 00:11:32.785 "name": "Nvme0n1", 00:11:32.785 "aliases": [ 00:11:32.785 "47296c98-c532-4c35-9a8f-640a367bd431" 00:11:32.785 ], 00:11:32.785 "product_name": "NVMe disk", 00:11:32.785 "block_size": 4096, 00:11:32.785 "num_blocks": 38912, 00:11:32.785 "uuid": "47296c98-c532-4c35-9a8f-640a367bd431", 00:11:32.785 "assigned_rate_limits": { 00:11:32.785 "rw_ios_per_sec": 0, 00:11:32.785 "rw_mbytes_per_sec": 0, 00:11:32.785 "r_mbytes_per_sec": 0, 00:11:32.785 "w_mbytes_per_sec": 0 00:11:32.785 }, 00:11:32.785 "claimed": false, 00:11:32.785 "zoned": false, 00:11:32.785 "supported_io_types": { 00:11:32.785 "read": true, 00:11:32.785 "write": true, 00:11:32.785 "unmap": true, 00:11:32.786 "write_zeroes": true, 00:11:32.786 "flush": true, 00:11:32.786 "reset": true, 00:11:32.786 "compare": true, 00:11:32.786 "compare_and_write": true, 00:11:32.786 "abort": true, 00:11:32.786 "nvme_admin": true, 00:11:32.786 "nvme_io": true 00:11:32.786 }, 00:11:32.786 "memory_domains": [ 00:11:32.786 { 00:11:32.786 "dma_device_id": "system", 00:11:32.786 "dma_device_type": 1 00:11:32.786 } 00:11:32.786 ], 00:11:32.786 "driver_specific": { 00:11:32.786 "nvme": [ 00:11:32.786 { 00:11:32.786 "trid": { 00:11:32.786 "trtype": "TCP", 00:11:32.786 "adrfam": "IPv4", 00:11:32.786 "traddr": "10.0.0.2", 00:11:32.786 "trsvcid": "4420", 00:11:32.786 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:32.786 }, 00:11:32.786 "ctrlr_data": { 00:11:32.786 "cntlid": 1, 00:11:32.786 "vendor_id": "0x8086", 00:11:32.786 "model_number": "SPDK bdev Controller", 00:11:32.786 "serial_number": "SPDK0", 00:11:32.786 "firmware_revision": "24.05", 00:11:32.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:32.786 "oacs": { 00:11:32.786 "security": 0, 00:11:32.786 "format": 0, 00:11:32.786 "firmware": 0, 00:11:32.786 "ns_manage": 0 00:11:32.786 }, 00:11:32.786 "multi_ctrlr": true, 00:11:32.786 "ana_reporting": false 00:11:32.786 }, 00:11:32.786 "vs": { 00:11:32.786 "nvme_version": "1.3" 
00:11:32.786 }, 00:11:32.786 "ns_data": { 00:11:32.786 "id": 1, 00:11:32.786 "can_share": true 00:11:32.786 } 00:11:32.786 } 00:11:32.786 ], 00:11:32.786 "mp_policy": "active_passive" 00:11:32.786 } 00:11:32.786 } 00:11:32.786 ] 00:11:32.786 13:40:35 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2570656 00:11:32.786 13:40:35 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:32.786 13:40:35 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:32.786 Running I/O for 10 seconds... 00:11:33.819 Latency(us) 00:11:33.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.819 Nvme0n1 : 1.00 14085.00 55.02 0.00 0.00 0.00 0.00 0.00 00:11:33.819 =================================================================================================================== 00:11:33.819 Total : 14085.00 55.02 0.00 0.00 0.00 0.00 0.00 00:11:33.819 00:11:34.755 13:40:37 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:34.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.755 Nvme0n1 : 2.00 14308.00 55.89 0.00 0.00 0.00 0.00 0.00 00:11:34.756 =================================================================================================================== 00:11:34.756 Total : 14308.00 55.89 0.00 0.00 0.00 0.00 0.00 00:11:34.756 00:11:35.014 true 00:11:35.014 13:40:37 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:35.014 13:40:37 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:35.273 13:40:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:35.273 
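The exported Nvme0n1 bdev above reports `num_blocks: 38912` for the 150 MiB lvol created earlier (`bdev_lvol_create ... lvol 150`). That is consistent with the lvol being allocated in whole 4 MiB clusters, so 150 MiB rounds up to 38 clusters; a sketch of that check, with all values taken from the trace:

```python
import math

MIB = 1024 * 1024
BLOCK_SIZE = 4096        # bdev_aio_create block size from the trace
CLUSTER_SIZE = 4 * MIB   # bdev_lvol_create_lvstore --cluster-sz 4194304
LVOL_SIZE_MIB = 150      # bdev_lvol_create ... lvol 150

# lvols are allocated in whole clusters: 150 MiB / 4 MiB = 37.5 -> 38 clusters.
clusters = math.ceil(LVOL_SIZE_MIB * MIB / CLUSTER_SIZE)
num_blocks = clusters * CLUSTER_SIZE // BLOCK_SIZE
print(clusters, num_blocks)  # 38 38912, matching the Nvme0n1 bdev_get_bdevs output
```

38 clusters of 4 MiB is 38912 blocks of 4096 bytes, which is exactly the `num_blocks` shown for both the NVMe-oF view (Nvme0n1) and, later in the trace, the underlying `lvs/lvol` bdev.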
13:40:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:35.274 13:40:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 2570656 00:11:35.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.839 Nvme0n1 : 3.00 14425.00 56.35 0.00 0.00 0.00 0.00 0.00 00:11:35.839 =================================================================================================================== 00:11:35.839 Total : 14425.00 56.35 0.00 0.00 0.00 0.00 0.00 00:11:35.839 00:11:36.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.775 Nvme0n1 : 4.00 14594.00 57.01 0.00 0.00 0.00 0.00 0.00 00:11:36.775 =================================================================================================================== 00:11:36.775 Total : 14594.00 57.01 0.00 0.00 0.00 0.00 0.00 00:11:36.775 00:11:37.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.714 Nvme0n1 : 5.00 14617.20 57.10 0.00 0.00 0.00 0.00 0.00 00:11:37.714 =================================================================================================================== 00:11:37.714 Total : 14617.20 57.10 0.00 0.00 0.00 0.00 0.00 00:11:37.714 00:11:39.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.094 Nvme0n1 : 6.00 14746.17 57.60 0.00 0.00 0.00 0.00 0.00 00:11:39.094 =================================================================================================================== 00:11:39.094 Total : 14746.17 57.60 0.00 0.00 0.00 0.00 0.00 00:11:39.094 00:11:40.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.028 Nvme0n1 : 7.00 14759.43 57.65 0.00 0.00 0.00 0.00 0.00 00:11:40.028 =================================================================================================================== 00:11:40.028 Total : 14759.43 57.65 0.00 0.00 0.00 0.00 0.00 00:11:40.028 00:11:40.963 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:11:40.963 Nvme0n1 : 8.00 14758.88 57.65 0.00 0.00 0.00 0.00 0.00 00:11:40.963 =================================================================================================================== 00:11:40.963 Total : 14758.88 57.65 0.00 0.00 0.00 0.00 0.00 00:11:40.963 00:11:41.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.896 Nvme0n1 : 9.00 14835.78 57.95 0.00 0.00 0.00 0.00 0.00 00:11:41.896 =================================================================================================================== 00:11:41.897 Total : 14835.78 57.95 0.00 0.00 0.00 0.00 0.00 00:11:41.897 00:11:42.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.832 Nvme0n1 : 10.00 14903.70 58.22 0.00 0.00 0.00 0.00 0.00 00:11:42.832 =================================================================================================================== 00:11:42.832 Total : 14903.70 58.22 0.00 0.00 0.00 0.00 0.00 00:11:42.832 00:11:42.832 00:11:42.832 Latency(us) 00:11:42.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.832 Nvme0n1 : 10.01 14903.89 58.22 0.00 0.00 8583.92 4660.34 19320.98 00:11:42.832 =================================================================================================================== 00:11:42.832 Total : 14903.89 58.22 0.00 0.00 8583.92 4660.34 19320.98 00:11:42.832 0 00:11:42.832 13:40:45 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2570518 00:11:42.832 13:40:45 -- common/autotest_common.sh@936 -- # '[' -z 2570518 ']' 00:11:42.832 13:40:45 -- common/autotest_common.sh@940 -- # kill -0 2570518 00:11:42.832 13:40:45 -- common/autotest_common.sh@941 -- # uname 00:11:42.832 13:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.832 13:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 2570518 00:11:42.832 13:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:42.832 13:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:42.832 13:40:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2570518' 00:11:42.832 killing process with pid 2570518 00:11:42.832 13:40:45 -- common/autotest_common.sh@955 -- # kill 2570518 00:11:42.832 Received shutdown signal, test time was about 10.000000 seconds 00:11:42.832 00:11:42.832 Latency(us) 00:11:42.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.832 =================================================================================================================== 00:11:42.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:42.832 13:40:45 -- common/autotest_common.sh@960 -- # wait 2570518 00:11:43.090 13:40:45 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:43.658 13:40:46 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:43.658 13:40:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:43.658 13:40:46 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:43.658 13:40:46 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:43.658 13:40:46 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:43.915 [2024-04-18 13:40:46.622943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:43.915 13:40:46 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:43.915 13:40:46 -- common/autotest_common.sh@638 -- # local 
es=0 00:11:43.916 13:40:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:43.916 13:40:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.916 13:40:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.916 13:40:46 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.916 13:40:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.916 13:40:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.916 13:40:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.916 13:40:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.916 13:40:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:43.916 13:40:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:44.175 request: 00:11:44.175 { 00:11:44.175 "uuid": "445cac2d-a354-4497-a9d9-f0f4ac0272c8", 00:11:44.175 "method": "bdev_lvol_get_lvstores", 00:11:44.175 "req_id": 1 00:11:44.175 } 00:11:44.175 Got JSON-RPC error response 00:11:44.175 response: 00:11:44.175 { 00:11:44.175 "code": -19, 00:11:44.175 "message": "No such device" 00:11:44.175 } 00:11:44.175 13:40:46 -- common/autotest_common.sh@641 -- # es=1 00:11:44.175 13:40:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:44.175 13:40:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:44.175 13:40:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:44.175 13:40:46 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:44.435 aio_bdev 00:11:44.435 13:40:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 47296c98-c532-4c35-9a8f-640a367bd431 00:11:44.435 13:40:47 -- common/autotest_common.sh@885 -- # local bdev_name=47296c98-c532-4c35-9a8f-640a367bd431 00:11:44.435 13:40:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:44.435 13:40:47 -- common/autotest_common.sh@887 -- # local i 00:11:44.435 13:40:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:44.435 13:40:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:44.435 13:40:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:44.694 13:40:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 47296c98-c532-4c35-9a8f-640a367bd431 -t 2000 00:11:44.954 [ 00:11:44.954 { 00:11:44.954 "name": "47296c98-c532-4c35-9a8f-640a367bd431", 00:11:44.954 "aliases": [ 00:11:44.954 "lvs/lvol" 00:11:44.954 ], 00:11:44.954 "product_name": "Logical Volume", 00:11:44.954 "block_size": 4096, 00:11:44.954 "num_blocks": 38912, 00:11:44.954 "uuid": "47296c98-c532-4c35-9a8f-640a367bd431", 00:11:44.954 "assigned_rate_limits": { 00:11:44.954 "rw_ios_per_sec": 0, 00:11:44.954 "rw_mbytes_per_sec": 0, 00:11:44.954 "r_mbytes_per_sec": 0, 00:11:44.954 "w_mbytes_per_sec": 0 00:11:44.954 }, 00:11:44.954 "claimed": false, 00:11:44.954 "zoned": false, 00:11:44.954 "supported_io_types": { 00:11:44.954 "read": true, 00:11:44.954 "write": true, 00:11:44.954 "unmap": true, 00:11:44.954 "write_zeroes": true, 00:11:44.954 "flush": false, 00:11:44.954 "reset": true, 00:11:44.954 "compare": false, 00:11:44.954 "compare_and_write": false, 00:11:44.954 "abort": false, 00:11:44.954 
"nvme_admin": false, 00:11:44.954 "nvme_io": false 00:11:44.954 }, 00:11:44.954 "driver_specific": { 00:11:44.954 "lvol": { 00:11:44.954 "lvol_store_uuid": "445cac2d-a354-4497-a9d9-f0f4ac0272c8", 00:11:44.954 "base_bdev": "aio_bdev", 00:11:44.954 "thin_provision": false, 00:11:44.954 "snapshot": false, 00:11:44.954 "clone": false, 00:11:44.954 "esnap_clone": false 00:11:44.954 } 00:11:44.954 } 00:11:44.954 } 00:11:44.954 ] 00:11:44.954 13:40:47 -- common/autotest_common.sh@893 -- # return 0 00:11:44.954 13:40:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:44.954 13:40:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:45.212 13:40:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:45.212 13:40:47 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:45.212 13:40:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:45.470 13:40:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:45.470 13:40:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47296c98-c532-4c35-9a8f-640a367bd431 00:11:45.730 13:40:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 445cac2d-a354-4497-a9d9-f0f4ac0272c8 00:11:46.011 13:40:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:46.270 13:40:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.270 00:11:46.270 real 0m16.976s 00:11:46.270 user 0m16.570s 00:11:46.270 sys 0m1.847s 00:11:46.270 13:40:48 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.270 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:11:46.270 ************************************ 00:11:46.270 END TEST lvs_grow_clean 00:11:46.270 ************************************ 00:11:46.270 13:40:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:46.270 13:40:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.270 13:40:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.270 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:11:46.270 ************************************ 00:11:46.270 START TEST lvs_grow_dirty 00:11:46.270 ************************************ 00:11:46.270 13:40:49 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.270 13:40:49 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:46.836 13:40:49 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:46.836 13:40:49 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:46.836 13:40:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=32b21bda-9068-4fba-b00f-d26cedd9c336 00:11:46.836 13:40:49 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:11:46.836 13:40:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:47.093 13:40:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:47.093 13:40:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:47.093 13:40:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32b21bda-9068-4fba-b00f-d26cedd9c336 lvol 150 00:11:47.353 13:40:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:11:47.353 13:40:50 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.353 13:40:50 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:47.611 [2024-04-18 13:40:50.383567] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:47.611 [2024-04-18 13:40:50.383664] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:47.611 true 00:11:47.611 13:40:50 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:11:47.611 13:40:50 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:47.869 13:40:50 -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:11:47.869 13:40:50 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:48.439 13:40:50 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:11:48.439 13:40:51 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:48.698 13:40:51 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.956 13:40:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2572577 00:11:48.956 13:40:51 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:48.956 13:40:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.956 13:40:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2572577 /var/tmp/bdevperf.sock 00:11:48.956 13:40:51 -- common/autotest_common.sh@817 -- # '[' -z 2572577 ']' 00:11:48.956 13:40:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.956 13:40:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:48.956 13:40:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:48.956 13:40:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:48.956 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:11:49.216 [2024-04-18 13:40:51.764953] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:11:49.216 [2024-04-18 13:40:51.765036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572577 ] 00:11:49.216 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.216 [2024-04-18 13:40:51.830794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.216 [2024-04-18 13:40:51.946981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.474 13:40:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:49.474 13:40:52 -- common/autotest_common.sh@850 -- # return 0 00:11:49.474 13:40:52 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:50.043 Nvme0n1 00:11:50.043 13:40:52 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:50.043 [ 00:11:50.043 { 00:11:50.043 "name": "Nvme0n1", 00:11:50.043 "aliases": [ 00:11:50.043 "3dedb9b7-a1a6-4b2c-9ea7-52653898c335" 00:11:50.043 ], 00:11:50.043 "product_name": "NVMe disk", 00:11:50.043 "block_size": 4096, 00:11:50.043 "num_blocks": 38912, 00:11:50.043 "uuid": "3dedb9b7-a1a6-4b2c-9ea7-52653898c335", 00:11:50.043 "assigned_rate_limits": { 00:11:50.043 "rw_ios_per_sec": 0, 00:11:50.043 "rw_mbytes_per_sec": 0, 00:11:50.043 "r_mbytes_per_sec": 0, 00:11:50.043 "w_mbytes_per_sec": 0 00:11:50.043 }, 00:11:50.043 "claimed": false, 00:11:50.043 "zoned": false, 00:11:50.043 
"supported_io_types": { 00:11:50.043 "read": true, 00:11:50.043 "write": true, 00:11:50.043 "unmap": true, 00:11:50.043 "write_zeroes": true, 00:11:50.043 "flush": true, 00:11:50.043 "reset": true, 00:11:50.043 "compare": true, 00:11:50.043 "compare_and_write": true, 00:11:50.043 "abort": true, 00:11:50.043 "nvme_admin": true, 00:11:50.043 "nvme_io": true 00:11:50.043 }, 00:11:50.043 "memory_domains": [ 00:11:50.043 { 00:11:50.043 "dma_device_id": "system", 00:11:50.043 "dma_device_type": 1 00:11:50.043 } 00:11:50.043 ], 00:11:50.043 "driver_specific": { 00:11:50.043 "nvme": [ 00:11:50.043 { 00:11:50.043 "trid": { 00:11:50.043 "trtype": "TCP", 00:11:50.043 "adrfam": "IPv4", 00:11:50.043 "traddr": "10.0.0.2", 00:11:50.043 "trsvcid": "4420", 00:11:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:50.043 }, 00:11:50.043 "ctrlr_data": { 00:11:50.043 "cntlid": 1, 00:11:50.043 "vendor_id": "0x8086", 00:11:50.043 "model_number": "SPDK bdev Controller", 00:11:50.043 "serial_number": "SPDK0", 00:11:50.043 "firmware_revision": "24.05", 00:11:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:50.043 "oacs": { 00:11:50.043 "security": 0, 00:11:50.043 "format": 0, 00:11:50.043 "firmware": 0, 00:11:50.043 "ns_manage": 0 00:11:50.043 }, 00:11:50.043 "multi_ctrlr": true, 00:11:50.043 "ana_reporting": false 00:11:50.043 }, 00:11:50.043 "vs": { 00:11:50.043 "nvme_version": "1.3" 00:11:50.043 }, 00:11:50.043 "ns_data": { 00:11:50.043 "id": 1, 00:11:50.043 "can_share": true 00:11:50.043 } 00:11:50.043 } 00:11:50.043 ], 00:11:50.043 "mp_policy": "active_passive" 00:11:50.043 } 00:11:50.043 } 00:11:50.043 ] 00:11:50.043 13:40:52 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2572713 00:11:50.043 13:40:52 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:50.043 13:40:52 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:50.302 Running I/O for 10 seconds... 
00:11:51.237 Latency(us) 00:11:51.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.237 Nvme0n1 : 1.00 14136.00 55.22 0.00 0.00 0.00 0.00 0.00 00:11:51.237 =================================================================================================================== 00:11:51.237 Total : 14136.00 55.22 0.00 0.00 0.00 0.00 0.00 00:11:51.237 00:11:52.173 13:40:54 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:11:52.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.173 Nvme0n1 : 2.00 14309.50 55.90 0.00 0.00 0.00 0.00 0.00 00:11:52.173 =================================================================================================================== 00:11:52.173 Total : 14309.50 55.90 0.00 0.00 0.00 0.00 0.00 00:11:52.173 00:11:52.431 true 00:11:52.431 13:40:55 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:11:52.431 13:40:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:52.691 13:40:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:52.691 13:40:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:52.691 13:40:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 2572713 00:11:53.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.261 Nvme0n1 : 3.00 14549.00 56.83 0.00 0.00 0.00 0.00 0.00 00:11:53.261 =================================================================================================================== 00:11:53.261 Total : 14549.00 56.83 0.00 0.00 0.00 0.00 0.00 00:11:53.261 00:11:54.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.200 
Nvme0n1 : 4.00 14743.75 57.59 0.00 0.00 0.00 0.00 0.00 00:11:54.200 =================================================================================================================== 00:11:54.200 Total : 14743.75 57.59 0.00 0.00 0.00 0.00 0.00 00:11:54.200 00:11:55.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.579 Nvme0n1 : 5.00 14811.40 57.86 0.00 0.00 0.00 0.00 0.00 00:11:55.579 =================================================================================================================== 00:11:55.579 Total : 14811.40 57.86 0.00 0.00 0.00 0.00 0.00 00:11:55.579 00:11:56.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.515 Nvme0n1 : 6.00 14834.50 57.95 0.00 0.00 0.00 0.00 0.00 00:11:56.515 =================================================================================================================== 00:11:56.515 Total : 14834.50 57.95 0.00 0.00 0.00 0.00 0.00 00:11:56.515 00:11:57.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.452 Nvme0n1 : 7.00 14826.57 57.92 0.00 0.00 0.00 0.00 0.00 00:11:57.452 =================================================================================================================== 00:11:57.452 Total : 14826.57 57.92 0.00 0.00 0.00 0.00 0.00 00:11:57.452 00:11:58.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.391 Nvme0n1 : 8.00 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:11:58.391 =================================================================================================================== 00:11:58.391 Total : 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:11:58.391 00:11:59.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.328 Nvme0n1 : 9.00 14865.33 58.07 0.00 0.00 0.00 0.00 0.00 00:11:59.328 =================================================================================================================== 
00:11:59.328 Total : 14865.33 58.07 0.00 0.00 0.00 0.00 0.00 00:11:59.328 00:12:00.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.263 Nvme0n1 : 10.00 14936.50 58.35 0.00 0.00 0.00 0.00 0.00 00:12:00.263 =================================================================================================================== 00:12:00.263 Total : 14936.50 58.35 0.00 0.00 0.00 0.00 0.00 00:12:00.263 00:12:00.263 00:12:00.263 Latency(us) 00:12:00.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.263 Nvme0n1 : 10.00 14938.03 58.35 0.00 0.00 8563.56 3592.34 17087.91 00:12:00.263 =================================================================================================================== 00:12:00.263 Total : 14938.03 58.35 0.00 0.00 8563.56 3592.34 17087.91 00:12:00.263 0 00:12:00.263 13:41:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2572577 00:12:00.263 13:41:03 -- common/autotest_common.sh@936 -- # '[' -z 2572577 ']' 00:12:00.263 13:41:03 -- common/autotest_common.sh@940 -- # kill -0 2572577 00:12:00.263 13:41:03 -- common/autotest_common.sh@941 -- # uname 00:12:00.263 13:41:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.263 13:41:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2572577 00:12:00.263 13:41:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:00.263 13:41:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:00.263 13:41:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2572577' 00:12:00.263 killing process with pid 2572577 00:12:00.263 13:41:03 -- common/autotest_common.sh@955 -- # kill 2572577 00:12:00.263 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.263 00:12:00.263 Latency(us) 00:12:00.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:12:00.263 =================================================================================================================== 00:12:00.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.263 13:41:03 -- common/autotest_common.sh@960 -- # wait 2572577 00:12:00.521 13:41:03 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2570076 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@74 -- # wait 2570076 00:12:01.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2570076 Killed "${NVMF_APP[@]}" "$@" 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@74 -- # true 00:12:01.089 13:41:03 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:12:01.089 13:41:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:01.089 13:41:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:01.089 13:41:03 -- common/autotest_common.sh@10 -- # set +x 00:12:01.089 13:41:03 -- nvmf/common.sh@470 -- # nvmfpid=2574043 00:12:01.089 13:41:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:01.089 13:41:03 -- nvmf/common.sh@471 -- # waitforlisten 2574043 00:12:01.089 13:41:03 -- common/autotest_common.sh@817 -- # '[' -z 2574043 ']' 00:12:01.089 13:41:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.089 
13:41:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:01.089 13:41:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.089 13:41:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:01.089 13:41:03 -- common/autotest_common.sh@10 -- # set +x 00:12:01.348 [2024-04-18 13:41:03.935572] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:01.348 [2024-04-18 13:41:03.935656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.349 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.349 [2024-04-18 13:41:04.001478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.349 [2024-04-18 13:41:04.108784] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.349 [2024-04-18 13:41:04.108853] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.349 [2024-04-18 13:41:04.108866] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.349 [2024-04-18 13:41:04.108878] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.349 [2024-04-18 13:41:04.108888] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:01.349 [2024-04-18 13:41:04.108915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.643 13:41:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:01.643 13:41:04 -- common/autotest_common.sh@850 -- # return 0 00:12:01.643 13:41:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:01.643 13:41:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:01.643 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:12:01.643 13:41:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.643 13:41:04 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:01.923 [2024-04-18 13:41:04.523815] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:01.923 [2024-04-18 13:41:04.523948] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:01.923 [2024-04-18 13:41:04.524004] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:01.923 13:41:04 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:12:01.923 13:41:04 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:12:01.923 13:41:04 -- common/autotest_common.sh@885 -- # local bdev_name=3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:12:01.923 13:41:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:01.923 13:41:04 -- common/autotest_common.sh@887 -- # local i 00:12:01.923 13:41:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:01.923 13:41:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:01.923 13:41:04 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:02.181 13:41:04 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 -t 2000 00:12:02.439 [ 00:12:02.439 { 00:12:02.439 "name": "3dedb9b7-a1a6-4b2c-9ea7-52653898c335", 00:12:02.439 "aliases": [ 00:12:02.439 "lvs/lvol" 00:12:02.439 ], 00:12:02.439 "product_name": "Logical Volume", 00:12:02.439 "block_size": 4096, 00:12:02.439 "num_blocks": 38912, 00:12:02.439 "uuid": "3dedb9b7-a1a6-4b2c-9ea7-52653898c335", 00:12:02.439 "assigned_rate_limits": { 00:12:02.439 "rw_ios_per_sec": 0, 00:12:02.439 "rw_mbytes_per_sec": 0, 00:12:02.439 "r_mbytes_per_sec": 0, 00:12:02.439 "w_mbytes_per_sec": 0 00:12:02.439 }, 00:12:02.439 "claimed": false, 00:12:02.439 "zoned": false, 00:12:02.439 "supported_io_types": { 00:12:02.439 "read": true, 00:12:02.439 "write": true, 00:12:02.439 "unmap": true, 00:12:02.439 "write_zeroes": true, 00:12:02.439 "flush": false, 00:12:02.439 "reset": true, 00:12:02.439 "compare": false, 00:12:02.439 "compare_and_write": false, 00:12:02.439 "abort": false, 00:12:02.439 "nvme_admin": false, 00:12:02.439 "nvme_io": false 00:12:02.439 }, 00:12:02.439 "driver_specific": { 00:12:02.439 "lvol": { 00:12:02.439 "lvol_store_uuid": "32b21bda-9068-4fba-b00f-d26cedd9c336", 00:12:02.439 "base_bdev": "aio_bdev", 00:12:02.439 "thin_provision": false, 00:12:02.439 "snapshot": false, 00:12:02.439 "clone": false, 00:12:02.439 "esnap_clone": false 00:12:02.439 } 00:12:02.439 } 00:12:02.439 } 00:12:02.439 ] 00:12:02.439 13:41:05 -- common/autotest_common.sh@893 -- # return 0 00:12:02.439 13:41:05 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:02.439 13:41:05 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:12:02.698 13:41:05 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:12:02.698 13:41:05 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:02.698 13:41:05 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:12:02.960 13:41:05 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:12:02.960 13:41:05 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.218 [2024-04-18 13:41:05.840939] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:03.218 13:41:05 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:03.218 13:41:05 -- common/autotest_common.sh@638 -- # local es=0 00:12:03.218 13:41:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:03.218 13:41:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.218 13:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.218 13:41:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.218 13:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.218 13:41:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.218 13:41:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.218 13:41:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.218 13:41:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:03.218 
13:41:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:03.476 request: 00:12:03.476 { 00:12:03.476 "uuid": "32b21bda-9068-4fba-b00f-d26cedd9c336", 00:12:03.476 "method": "bdev_lvol_get_lvstores", 00:12:03.476 "req_id": 1 00:12:03.476 } 00:12:03.476 Got JSON-RPC error response 00:12:03.476 response: 00:12:03.476 { 00:12:03.476 "code": -19, 00:12:03.476 "message": "No such device" 00:12:03.476 } 00:12:03.477 13:41:06 -- common/autotest_common.sh@641 -- # es=1 00:12:03.477 13:41:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:03.477 13:41:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:03.477 13:41:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:03.477 13:41:06 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:03.735 aio_bdev 00:12:03.735 13:41:06 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:12:03.735 13:41:06 -- common/autotest_common.sh@885 -- # local bdev_name=3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:12:03.735 13:41:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:03.735 13:41:06 -- common/autotest_common.sh@887 -- # local i 00:12:03.735 13:41:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:03.735 13:41:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:03.735 13:41:06 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:03.994 13:41:06 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 -t 2000 00:12:04.252 [ 00:12:04.252 { 00:12:04.252 "name": 
"3dedb9b7-a1a6-4b2c-9ea7-52653898c335", 00:12:04.252 "aliases": [ 00:12:04.252 "lvs/lvol" 00:12:04.252 ], 00:12:04.252 "product_name": "Logical Volume", 00:12:04.252 "block_size": 4096, 00:12:04.252 "num_blocks": 38912, 00:12:04.252 "uuid": "3dedb9b7-a1a6-4b2c-9ea7-52653898c335", 00:12:04.252 "assigned_rate_limits": { 00:12:04.252 "rw_ios_per_sec": 0, 00:12:04.252 "rw_mbytes_per_sec": 0, 00:12:04.252 "r_mbytes_per_sec": 0, 00:12:04.252 "w_mbytes_per_sec": 0 00:12:04.252 }, 00:12:04.252 "claimed": false, 00:12:04.252 "zoned": false, 00:12:04.252 "supported_io_types": { 00:12:04.252 "read": true, 00:12:04.252 "write": true, 00:12:04.252 "unmap": true, 00:12:04.252 "write_zeroes": true, 00:12:04.252 "flush": false, 00:12:04.252 "reset": true, 00:12:04.252 "compare": false, 00:12:04.252 "compare_and_write": false, 00:12:04.252 "abort": false, 00:12:04.252 "nvme_admin": false, 00:12:04.252 "nvme_io": false 00:12:04.252 }, 00:12:04.252 "driver_specific": { 00:12:04.252 "lvol": { 00:12:04.252 "lvol_store_uuid": "32b21bda-9068-4fba-b00f-d26cedd9c336", 00:12:04.252 "base_bdev": "aio_bdev", 00:12:04.252 "thin_provision": false, 00:12:04.252 "snapshot": false, 00:12:04.252 "clone": false, 00:12:04.252 "esnap_clone": false 00:12:04.252 } 00:12:04.252 } 00:12:04.252 } 00:12:04.252 ] 00:12:04.252 13:41:06 -- common/autotest_common.sh@893 -- # return 0 00:12:04.252 13:41:06 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:04.252 13:41:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:04.511 13:41:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:04.511 13:41:07 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:04.511 13:41:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:04.770 
13:41:07 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:04.770 13:41:07 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3dedb9b7-a1a6-4b2c-9ea7-52653898c335 00:12:05.028 13:41:07 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32b21bda-9068-4fba-b00f-d26cedd9c336 00:12:05.286 13:41:07 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:05.543 13:41:08 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:05.543 00:12:05.543 real 0m19.230s 00:12:05.543 user 0m47.932s 00:12:05.543 sys 0m4.899s 00:12:05.543 13:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.543 13:41:08 -- common/autotest_common.sh@10 -- # set +x 00:12:05.543 ************************************ 00:12:05.543 END TEST lvs_grow_dirty 00:12:05.543 ************************************ 00:12:05.543 13:41:08 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:05.543 13:41:08 -- common/autotest_common.sh@794 -- # type=--id 00:12:05.543 13:41:08 -- common/autotest_common.sh@795 -- # id=0 00:12:05.543 13:41:08 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:05.543 13:41:08 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:05.543 13:41:08 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:05.543 13:41:08 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:05.543 13:41:08 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:05.543 13:41:08 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:05.543 nvmf_trace.0 00:12:05.543 13:41:08 -- common/autotest_common.sh@809 -- # 
return 0 00:12:05.543 13:41:08 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:05.543 13:41:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:05.544 13:41:08 -- nvmf/common.sh@117 -- # sync 00:12:05.544 13:41:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:05.544 13:41:08 -- nvmf/common.sh@120 -- # set +e 00:12:05.544 13:41:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.544 13:41:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:05.544 rmmod nvme_tcp 00:12:05.803 rmmod nvme_fabrics 00:12:05.803 rmmod nvme_keyring 00:12:05.803 13:41:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.803 13:41:08 -- nvmf/common.sh@124 -- # set -e 00:12:05.803 13:41:08 -- nvmf/common.sh@125 -- # return 0 00:12:05.803 13:41:08 -- nvmf/common.sh@478 -- # '[' -n 2574043 ']' 00:12:05.803 13:41:08 -- nvmf/common.sh@479 -- # killprocess 2574043 00:12:05.803 13:41:08 -- common/autotest_common.sh@936 -- # '[' -z 2574043 ']' 00:12:05.803 13:41:08 -- common/autotest_common.sh@940 -- # kill -0 2574043 00:12:05.803 13:41:08 -- common/autotest_common.sh@941 -- # uname 00:12:05.803 13:41:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.803 13:41:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2574043 00:12:05.803 13:41:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:05.803 13:41:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:05.803 13:41:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2574043' 00:12:05.803 killing process with pid 2574043 00:12:05.803 13:41:08 -- common/autotest_common.sh@955 -- # kill 2574043 00:12:05.803 13:41:08 -- common/autotest_common.sh@960 -- # wait 2574043 00:12:06.062 13:41:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:06.062 13:41:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:06.062 13:41:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:06.062 13:41:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.062 13:41:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.062 13:41:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.062 13:41:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.062 13:41:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.965 13:41:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.965 00:12:07.965 real 0m41.687s 00:12:07.965 user 1m10.358s 00:12:07.965 sys 0m8.692s 00:12:07.965 13:41:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.965 13:41:10 -- common/autotest_common.sh@10 -- # set +x 00:12:07.965 ************************************ 00:12:07.965 END TEST nvmf_lvs_grow 00:12:07.965 ************************************ 00:12:08.224 13:41:10 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:08.224 13:41:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:08.224 13:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.224 13:41:10 -- common/autotest_common.sh@10 -- # set +x 00:12:08.224 ************************************ 00:12:08.224 START TEST nvmf_bdev_io_wait 00:12:08.224 ************************************ 00:12:08.224 13:41:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:08.224 * Looking for test storage... 
00:12:08.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.224 13:41:10 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.224 13:41:10 -- nvmf/common.sh@7 -- # uname -s 00:12:08.224 13:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.224 13:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.224 13:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.224 13:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.224 13:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.224 13:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.224 13:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.224 13:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.224 13:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.224 13:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.224 13:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:08.224 13:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:08.224 13:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.224 13:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.224 13:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.224 13:41:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.224 13:41:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.224 13:41:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.224 13:41:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.224 13:41:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.224 13:41:10 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.224 13:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.224 13:41:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.224 13:41:10 -- paths/export.sh@5 -- # export PATH 00:12:08.224 13:41:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.224 13:41:10 -- nvmf/common.sh@47 -- # : 0 00:12:08.224 13:41:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.224 13:41:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.224 13:41:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.224 13:41:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.224 13:41:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.224 13:41:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.224 13:41:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.224 13:41:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.224 13:41:10 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.224 13:41:10 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.224 13:41:10 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:08.224 13:41:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:08.224 13:41:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.224 13:41:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:08.224 13:41:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:08.224 13:41:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:08.224 13:41:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.224 13:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.224 13:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.224 
13:41:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:08.224 13:41:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:08.224 13:41:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.224 13:41:10 -- common/autotest_common.sh@10 -- # set +x 00:12:10.128 13:41:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:10.128 13:41:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.128 13:41:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.128 13:41:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.128 13:41:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.128 13:41:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.128 13:41:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.128 13:41:12 -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.128 13:41:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.128 13:41:12 -- nvmf/common.sh@296 -- # e810=() 00:12:10.128 13:41:12 -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.128 13:41:12 -- nvmf/common.sh@297 -- # x722=() 00:12:10.128 13:41:12 -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.128 13:41:12 -- nvmf/common.sh@298 -- # mlx=() 00:12:10.128 13:41:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.128 13:41:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.128 13:41:12 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.128 13:41:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.128 13:41:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.128 13:41:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.128 13:41:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.128 13:41:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:10.128 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:10.128 13:41:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.128 13:41:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:10.128 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:10.128 13:41:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.128 13:41:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.128 13:41:12 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.128 13:41:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.128 13:41:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.129 13:41:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:10.129 13:41:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.129 13:41:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:10.129 Found net devices under 0000:84:00.0: cvl_0_0 00:12:10.129 13:41:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.129 13:41:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.129 13:41:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.129 13:41:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:10.129 13:41:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.129 13:41:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:10.129 Found net devices under 0000:84:00.1: cvl_0_1 00:12:10.129 13:41:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.129 13:41:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:10.129 13:41:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:10.129 13:41:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:10.129 13:41:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:10.129 13:41:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:10.129 13:41:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.129 13:41:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.129 13:41:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.129 13:41:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.129 13:41:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.129 13:41:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.129 13:41:12 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.129 13:41:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.129 13:41:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.129 13:41:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.129 13:41:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.129 13:41:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.129 13:41:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.390 13:41:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.390 13:41:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.390 13:41:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.390 13:41:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.390 13:41:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.390 13:41:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.390 13:41:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:12:10.390 00:12:10.390 --- 10.0.0.2 ping statistics --- 00:12:10.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.390 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:10.390 13:41:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:12:10.390 00:12:10.390 --- 10.0.0.1 ping statistics --- 00:12:10.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.390 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:10.390 13:41:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.390 13:41:13 -- nvmf/common.sh@411 -- # return 0 00:12:10.390 13:41:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:10.390 13:41:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.390 13:41:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:10.390 13:41:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:10.390 13:41:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.390 13:41:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:10.390 13:41:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:10.390 13:41:13 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:10.390 13:41:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:10.390 13:41:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:10.390 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.390 13:41:13 -- nvmf/common.sh@470 -- # nvmfpid=2576591 00:12:10.390 13:41:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:10.390 13:41:13 -- nvmf/common.sh@471 -- # waitforlisten 2576591 00:12:10.390 13:41:13 -- common/autotest_common.sh@817 -- # '[' -z 2576591 ']' 00:12:10.390 13:41:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.390 13:41:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:10.390 13:41:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:10.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.390 13:41:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:10.390 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.390 [2024-04-18 13:41:13.103905] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:10.390 [2024-04-18 13:41:13.103990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.390 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.390 [2024-04-18 13:41:13.171590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.650 [2024-04-18 13:41:13.284322] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.650 [2024-04-18 13:41:13.284375] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.650 [2024-04-18 13:41:13.284388] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.650 [2024-04-18 13:41:13.284400] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.650 [2024-04-18 13:41:13.284410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:10.650 [2024-04-18 13:41:13.284498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.650 [2024-04-18 13:41:13.284566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.650 [2024-04-18 13:41:13.284629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.650 [2024-04-18 13:41:13.284632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.650 13:41:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:10.650 13:41:13 -- common/autotest_common.sh@850 -- # return 0 00:12:10.650 13:41:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:10.650 13:41:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:10.650 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.650 13:41:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.650 13:41:13 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:10.650 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.650 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.650 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.650 13:41:13 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:10.650 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.650 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.650 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.650 13:41:13 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.650 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.650 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.650 [2024-04-18 13:41:13.418738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.650 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.650 13:41:13 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:10.651 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.651 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 Malloc0 00:12:10.909 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:10.909 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.909 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.909 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.909 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.909 13:41:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.909 13:41:13 -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 [2024-04-18 13:41:13.490030] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.909 13:41:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2576624 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # config=() 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # local subsystem config 00:12:10.909 13:41:13 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:10.909 { 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme$subsystem", 00:12:10.909 "trtype": "$TEST_TRANSPORT", 00:12:10.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "$NVMF_PORT", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.909 "hdgst": ${hdgst:-false}, 00:12:10.909 "ddgst": ${ddgst:-false} 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 } 00:12:10.909 EOF 00:12:10.909 )") 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@30 -- # READ_PID=2576626 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # config=() 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # local subsystem config 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # cat 00:12:10.909 13:41:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2576629 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:10.909 { 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme$subsystem", 00:12:10.909 "trtype": "$TEST_TRANSPORT", 00:12:10.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "$NVMF_PORT", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.909 "hdgst": ${hdgst:-false}, 00:12:10.909 "ddgst": ${ddgst:-false} 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 } 00:12:10.909 EOF 00:12:10.909 )") 00:12:10.909 13:41:13 -- 
target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # config=() 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # local subsystem config 00:12:10.909 13:41:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # cat 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2576633 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:10.909 { 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme$subsystem", 00:12:10.909 "trtype": "$TEST_TRANSPORT", 00:12:10.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "$NVMF_PORT", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.909 "hdgst": ${hdgst:-false}, 00:12:10.909 "ddgst": ${ddgst:-false} 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 } 00:12:10.909 EOF 00:12:10.909 )") 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@35 -- # sync 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # config=() 00:12:10.909 13:41:13 -- nvmf/common.sh@521 -- # local subsystem config 00:12:10.909 13:41:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:10.909 { 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme$subsystem", 00:12:10.909 "trtype": "$TEST_TRANSPORT", 00:12:10.909 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "$NVMF_PORT", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.909 "hdgst": ${hdgst:-false}, 00:12:10.909 "ddgst": ${ddgst:-false} 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 } 00:12:10.909 EOF 00:12:10.909 )") 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # cat 00:12:10.909 13:41:13 -- nvmf/common.sh@545 -- # jq . 00:12:10.909 13:41:13 -- nvmf/common.sh@543 -- # cat 00:12:10.909 13:41:13 -- target/bdev_io_wait.sh@37 -- # wait 2576624 00:12:10.909 13:41:13 -- nvmf/common.sh@545 -- # jq . 00:12:10.909 13:41:13 -- nvmf/common.sh@546 -- # IFS=, 00:12:10.909 13:41:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme1", 00:12:10.909 "trtype": "tcp", 00:12:10.909 "traddr": "10.0.0.2", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "4420", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.909 "hdgst": false, 00:12:10.909 "ddgst": false 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 }' 00:12:10.909 13:41:13 -- nvmf/common.sh@545 -- # jq . 00:12:10.909 13:41:13 -- nvmf/common.sh@546 -- # IFS=, 00:12:10.909 13:41:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme1", 00:12:10.909 "trtype": "tcp", 00:12:10.909 "traddr": "10.0.0.2", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "4420", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.909 "hdgst": false, 00:12:10.909 "ddgst": false 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 }' 00:12:10.909 13:41:13 -- nvmf/common.sh@545 -- # jq . 
00:12:10.909 13:41:13 -- nvmf/common.sh@546 -- # IFS=, 00:12:10.909 13:41:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme1", 00:12:10.909 "trtype": "tcp", 00:12:10.909 "traddr": "10.0.0.2", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "4420", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.909 "hdgst": false, 00:12:10.909 "ddgst": false 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 }' 00:12:10.909 13:41:13 -- nvmf/common.sh@546 -- # IFS=, 00:12:10.909 13:41:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:10.909 "params": { 00:12:10.909 "name": "Nvme1", 00:12:10.909 "trtype": "tcp", 00:12:10.909 "traddr": "10.0.0.2", 00:12:10.909 "adrfam": "ipv4", 00:12:10.909 "trsvcid": "4420", 00:12:10.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.909 "hdgst": false, 00:12:10.909 "ddgst": false 00:12:10.909 }, 00:12:10.909 "method": "bdev_nvme_attach_controller" 00:12:10.909 }' 00:12:10.909 [2024-04-18 13:41:13.536312] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... [2024-04-18 13:41:13.536312] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... [2024-04-18 13:41:13.536400] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:10.909 [2024-04-18 13:41:13.536401] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:10.909 [2024-04-18 13:41:13.537566] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:12:10.910 [2024-04-18 13:41:13.537642] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:10.910 [2024-04-18 13:41:13.546528] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:10.910 [2024-04-18 13:41:13.546602] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:10.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.910 [2024-04-18 13:41:13.715033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.169 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.169 [2024-04-18 13:41:13.816424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.169 [2024-04-18 13:41:13.824498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.169 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.169 [2024-04-18 13:41:13.895181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.169 [2024-04-18 13:41:13.923972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:11.169 [2024-04-18 13:41:13.968352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.427 [2024-04-18 13:41:13.989316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:11.427 [2024-04-18 13:41:14.060830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:11.427 Running I/O for 1 seconds... 00:12:11.684 Running I/O for 1 seconds... 00:12:11.684 Running I/O for 1 seconds... 00:12:11.684 Running I/O for 1 seconds... 
00:12:12.619 00:12:12.619 Latency(us) 00:12:12.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.619 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:12.619 Nvme1n1 : 1.01 10631.57 41.53 0.00 0.00 11991.71 6990.51 20874.43 00:12:12.619 =================================================================================================================== 00:12:12.619 Total : 10631.57 41.53 0.00 0.00 11991.71 6990.51 20874.43 00:12:12.619 00:12:12.619 Latency(us) 00:12:12.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.619 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:12.619 Nvme1n1 : 1.00 205069.35 801.05 0.00 0.00 621.84 245.76 758.52 00:12:12.619 =================================================================================================================== 00:12:12.619 Total : 205069.35 801.05 0.00 0.00 621.84 245.76 758.52 00:12:12.619 00:12:12.619 Latency(us) 00:12:12.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.619 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:12.619 Nvme1n1 : 1.01 8462.43 33.06 0.00 0.00 15050.68 9223.59 23592.96 00:12:12.619 =================================================================================================================== 00:12:12.619 Total : 8462.43 33.06 0.00 0.00 15050.68 9223.59 23592.96 00:12:12.619 00:12:12.619 Latency(us) 00:12:12.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.619 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:12.619 Nvme1n1 : 1.01 9235.66 36.08 0.00 0.00 13802.70 5922.51 23884.23 00:12:12.619 =================================================================================================================== 00:12:12.619 Total : 9235.66 36.08 0.00 0.00 13802.70 5922.51 23884.23 00:12:12.877 13:41:15 -- target/bdev_io_wait.sh@38 -- 
# wait 2576626 00:12:12.877 13:41:15 -- target/bdev_io_wait.sh@39 -- # wait 2576629 00:12:12.877 13:41:15 -- target/bdev_io_wait.sh@40 -- # wait 2576633 00:12:13.137 13:41:15 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.137 13:41:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.137 13:41:15 -- common/autotest_common.sh@10 -- # set +x 00:12:13.137 13:41:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.137 13:41:15 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:13.137 13:41:15 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:13.137 13:41:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:13.137 13:41:15 -- nvmf/common.sh@117 -- # sync 00:12:13.137 13:41:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.137 13:41:15 -- nvmf/common.sh@120 -- # set +e 00:12:13.137 13:41:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.137 13:41:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.137 rmmod nvme_tcp 00:12:13.137 rmmod nvme_fabrics 00:12:13.137 rmmod nvme_keyring 00:12:13.137 13:41:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.137 13:41:15 -- nvmf/common.sh@124 -- # set -e 00:12:13.137 13:41:15 -- nvmf/common.sh@125 -- # return 0 00:12:13.137 13:41:15 -- nvmf/common.sh@478 -- # '[' -n 2576591 ']' 00:12:13.137 13:41:15 -- nvmf/common.sh@479 -- # killprocess 2576591 00:12:13.137 13:41:15 -- common/autotest_common.sh@936 -- # '[' -z 2576591 ']' 00:12:13.137 13:41:15 -- common/autotest_common.sh@940 -- # kill -0 2576591 00:12:13.137 13:41:15 -- common/autotest_common.sh@941 -- # uname 00:12:13.137 13:41:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:13.137 13:41:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2576591 00:12:13.137 13:41:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:13.137 13:41:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
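The bdevperf latency tables above report both IOPS and MiB/s per job; with the 4096-byte IO size shown in each job header, the two columns are related by MiB/s = IOPS × io_size / 2^20. A quick consistency check against the rows above:

```python
def mibps(iops: float, io_size: int = 4096) -> float:
    # Throughput in MiB/s implied by an IOPS figure at a fixed IO size.
    return iops * io_size / (1 << 20)

# Write job from the table: 10631.57 IOPS at 4 KiB -> ~41.53 MiB/s
print(round(mibps(10631.57), 2))  # 41.53
# Flush job: 205069.35 IOPS -> ~801.05 MiB/s
print(round(mibps(205069.35), 2))  # 801.05
```

All four rows (write, flush, read, unmap) satisfy this relation, which is a useful sanity check when reading bdevperf output.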
00:12:13.137 13:41:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2576591' 00:12:13.137 killing process with pid 2576591 00:12:13.137 13:41:15 -- common/autotest_common.sh@955 -- # kill 2576591 00:12:13.137 13:41:15 -- common/autotest_common.sh@960 -- # wait 2576591 00:12:13.395 13:41:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:13.395 13:41:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:13.395 13:41:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:13.395 13:41:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.395 13:41:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.395 13:41:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.395 13:41:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.395 13:41:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.299 13:41:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.299 00:12:15.299 real 0m7.200s 00:12:15.299 user 0m17.125s 00:12:15.299 sys 0m3.479s 00:12:15.299 13:41:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.299 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:12:15.299 ************************************ 00:12:15.299 END TEST nvmf_bdev_io_wait 00:12:15.299 ************************************ 00:12:15.557 13:41:18 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:15.557 13:41:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:15.557 13:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.557 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 ************************************ 00:12:15.557 START TEST nvmf_queue_depth 00:12:15.557 ************************************ 00:12:15.557 13:41:18 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:15.557 * Looking for test storage... 00:12:15.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.557 13:41:18 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.557 13:41:18 -- nvmf/common.sh@7 -- # uname -s 00:12:15.557 13:41:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.557 13:41:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.557 13:41:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.557 13:41:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.557 13:41:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.557 13:41:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.557 13:41:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.557 13:41:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.557 13:41:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.557 13:41:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.557 13:41:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:15.557 13:41:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:15.557 13:41:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.557 13:41:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.557 13:41:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.557 13:41:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.557 13:41:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.557 13:41:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.557 13:41:18 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.557 13:41:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.557 13:41:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.557 13:41:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.557 13:41:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.557 13:41:18 -- paths/export.sh@5 -- # export PATH 00:12:15.557 13:41:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.557 13:41:18 -- nvmf/common.sh@47 -- # : 0 00:12:15.557 13:41:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.557 13:41:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.557 13:41:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.557 13:41:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.557 13:41:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.557 13:41:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.557 13:41:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.557 13:41:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.557 13:41:18 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:15.557 13:41:18 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:15.557 13:41:18 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.557 13:41:18 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:15.557 13:41:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:15.557 13:41:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.557 13:41:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:15.557 13:41:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:15.557 13:41:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:15.557 13:41:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.557 13:41:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:12:15.557 13:41:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.557 13:41:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:15.557 13:41:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:15.557 13:41:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.557 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:12:18.095 13:41:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:18.095 13:41:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.095 13:41:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.095 13:41:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.095 13:41:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.095 13:41:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.095 13:41:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.095 13:41:20 -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.095 13:41:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.095 13:41:20 -- nvmf/common.sh@296 -- # e810=() 00:12:18.095 13:41:20 -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.095 13:41:20 -- nvmf/common.sh@297 -- # x722=() 00:12:18.095 13:41:20 -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.095 13:41:20 -- nvmf/common.sh@298 -- # mlx=() 00:12:18.095 13:41:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.095 13:41:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.095 13:41:20 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.095 13:41:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.095 13:41:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.095 13:41:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.095 13:41:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.095 13:41:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:18.095 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:18.095 13:41:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.095 13:41:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:18.095 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:18.095 13:41:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.095 
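The scan above matches PCI NICs by vendor/device ID ("Found 0000:84:00.0 (0x8086 - 0x159b)"); nvmf/common.sh then resolves each matched address to its kernel net interface by globbing the device's net/ directory in sysfs and keeping only the basename. A hedged Python equivalent, parameterised on the sysfs root so it can be exercised without real hardware:

```python
import glob
import os

def net_devs_for_pci(pci: str, sysfs_root: str = "/sys/bus/pci/devices") -> list:
    # Equivalent of the shell pattern:
    #   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    #   pci_net_devs=("${pci_net_devs[@]##*/}")
    paths = glob.glob(os.path.join(sysfs_root, pci, "net", "*"))
    return sorted(os.path.basename(p) for p in paths)
```

On this test node the two ice ports resolve to cvl_0_0 and cvl_0_1, as the log reports.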
13:41:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.095 13:41:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.095 13:41:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.095 13:41:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:18.095 13:41:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.095 13:41:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:18.095 Found net devices under 0000:84:00.0: cvl_0_0 00:12:18.095 13:41:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.096 13:41:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.096 13:41:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.096 13:41:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:18.096 13:41:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.096 13:41:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:18.096 Found net devices under 0000:84:00.1: cvl_0_1 00:12:18.096 13:41:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.096 13:41:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:18.096 13:41:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:18.096 13:41:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:18.096 13:41:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:18.096 13:41:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:18.096 13:41:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.096 13:41:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.096 13:41:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.096 13:41:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.096 13:41:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.096 13:41:20 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.096 13:41:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.096 13:41:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.096 13:41:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.096 13:41:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.096 13:41:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.096 13:41:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.096 13:41:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.096 13:41:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.096 13:41:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.096 13:41:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.096 13:41:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.096 13:41:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.096 13:41:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.096 13:41:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:12:18.096 00:12:18.096 --- 10.0.0.2 ping statistics --- 00:12:18.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.096 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:18.096 13:41:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:12:18.096 00:12:18.096 --- 10.0.0.1 ping statistics --- 00:12:18.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.096 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:18.096 13:41:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.096 13:41:20 -- nvmf/common.sh@411 -- # return 0 00:12:18.096 13:41:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:18.096 13:41:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.096 13:41:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:18.096 13:41:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:18.096 13:41:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.096 13:41:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:18.096 13:41:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:18.096 13:41:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:18.096 13:41:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:18.096 13:41:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 13:41:20 -- nvmf/common.sh@470 -- # nvmfpid=2578869 00:12:18.096 13:41:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:18.096 13:41:20 -- nvmf/common.sh@471 -- # waitforlisten 2578869 00:12:18.096 13:41:20 -- common/autotest_common.sh@817 -- # '[' -z 2578869 ']' 00:12:18.096 13:41:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.096 13:41:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:18.096 13:41:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:18.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.096 13:41:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 [2024-04-18 13:41:20.514636] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:18.096 [2024-04-18 13:41:20.514709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.096 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.096 [2024-04-18 13:41:20.583364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.096 [2024-04-18 13:41:20.693966] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.096 [2024-04-18 13:41:20.694031] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.096 [2024-04-18 13:41:20.694059] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.096 [2024-04-18 13:41:20.694071] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.096 [2024-04-18 13:41:20.694081] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
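waitforlisten above blocks until the freshly started nvmf_tgt accepts RPC connections on /var/tmp/spdk.sock. A simplified sketch of that poll-and-connect pattern (wait_for_unix_socket is an illustrative name, not the actual SPDK helper, which also retries RPCs and checks the pid):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0) -> bool:
    # Poll until some process is listening on the UNIX domain socket,
    # or give up after the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(0.05)
        finally:
            s.close()
    return False
```

The same pattern is reused later in this log for the bdevperf RPC socket at /var/tmp/bdevperf.sock.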
00:12:18.096 [2024-04-18 13:41:20.694115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.096 13:41:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.096 13:41:20 -- common/autotest_common.sh@850 -- # return 0 00:12:18.096 13:41:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:18.096 13:41:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 13:41:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.096 13:41:20 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.096 13:41:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 [2024-04-18 13:41:20.838793] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.096 13:41:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.096 13:41:20 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.096 13:41:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 Malloc0 00:12:18.096 13:41:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.096 13:41:20 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.096 13:41:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 13:41:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.096 13:41:20 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.096 13:41:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.096 13:41:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.096 13:41:20 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.096 13:41:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.096 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.406 [2024-04-18 13:41:20.897986] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.406 13:41:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.406 13:41:20 -- target/queue_depth.sh@30 -- # bdevperf_pid=2579007 00:12:18.406 13:41:20 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:18.406 13:41:20 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:18.406 13:41:20 -- target/queue_depth.sh@33 -- # waitforlisten 2579007 /var/tmp/bdevperf.sock 00:12:18.406 13:41:20 -- common/autotest_common.sh@817 -- # '[' -z 2579007 ']' 00:12:18.406 13:41:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:18.406 13:41:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:18.406 13:41:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:18.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:18.406 13:41:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:18.406 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:12:18.406 [2024-04-18 13:41:20.943116] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:12:18.406 [2024-04-18 13:41:20.943200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579007 ] 00:12:18.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.406 [2024-04-18 13:41:21.002650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.406 [2024-04-18 13:41:21.119919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.666 13:41:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.666 13:41:21 -- common/autotest_common.sh@850 -- # return 0 00:12:18.666 13:41:21 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:18.666 13:41:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.666 13:41:21 -- common/autotest_common.sh@10 -- # set +x 00:12:18.666 NVMe0n1 00:12:18.666 13:41:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.666 13:41:21 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:18.666 Running I/O for 10 seconds... 
00:12:30.887 00:12:30.887 Latency(us) 00:12:30.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.887 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:30.887 Verification LBA range: start 0x0 length 0x4000 00:12:30.887 NVMe0n1 : 10.11 8279.07 32.34 0.00 0.00 123080.69 24078.41 91653.31 00:12:30.887 =================================================================================================================== 00:12:30.887 Total : 8279.07 32.34 0.00 0.00 123080.69 24078.41 91653.31 00:12:30.887 0 00:12:30.887 13:41:31 -- target/queue_depth.sh@39 -- # killprocess 2579007 00:12:30.887 13:41:31 -- common/autotest_common.sh@936 -- # '[' -z 2579007 ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@940 -- # kill -0 2579007 00:12:30.887 13:41:31 -- common/autotest_common.sh@941 -- # uname 00:12:30.887 13:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2579007 00:12:30.887 13:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.887 13:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2579007' 00:12:30.887 killing process with pid 2579007 00:12:30.887 13:41:31 -- common/autotest_common.sh@955 -- # kill 2579007 00:12:30.887 Received shutdown signal, test time was about 10.000000 seconds 00:12:30.887 00:12:30.887 Latency(us) 00:12:30.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.887 =================================================================================================================== 00:12:30.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.887 13:41:31 -- common/autotest_common.sh@960 -- # wait 2579007 00:12:30.887 13:41:31 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:30.887 13:41:31 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:12:30.887 13:41:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:30.887 13:41:31 -- nvmf/common.sh@117 -- # sync 00:12:30.887 13:41:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.887 13:41:31 -- nvmf/common.sh@120 -- # set +e 00:12:30.887 13:41:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.887 13:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.887 rmmod nvme_tcp 00:12:30.887 rmmod nvme_fabrics 00:12:30.887 rmmod nvme_keyring 00:12:30.887 13:41:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.887 13:41:31 -- nvmf/common.sh@124 -- # set -e 00:12:30.887 13:41:31 -- nvmf/common.sh@125 -- # return 0 00:12:30.887 13:41:31 -- nvmf/common.sh@478 -- # '[' -n 2578869 ']' 00:12:30.887 13:41:31 -- nvmf/common.sh@479 -- # killprocess 2578869 00:12:30.887 13:41:31 -- common/autotest_common.sh@936 -- # '[' -z 2578869 ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@940 -- # kill -0 2578869 00:12:30.887 13:41:31 -- common/autotest_common.sh@941 -- # uname 00:12:30.887 13:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2578869 00:12:30.887 13:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:30.887 13:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:30.887 13:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2578869' 00:12:30.887 killing process with pid 2578869 00:12:30.887 13:41:31 -- common/autotest_common.sh@955 -- # kill 2578869 00:12:30.887 13:41:31 -- common/autotest_common.sh@960 -- # wait 2578869 00:12:30.887 13:41:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:30.887 13:41:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:30.887 13:41:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:30.887 13:41:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:12:30.887 13:41:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.887 13:41:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.887 13:41:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.887 13:41:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.824 13:41:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.824 00:12:31.824 real 0m16.106s 00:12:31.824 user 0m22.406s 00:12:31.824 sys 0m3.219s 00:12:31.824 13:41:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.824 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:12:31.824 ************************************ 00:12:31.824 END TEST nvmf_queue_depth 00:12:31.824 ************************************ 00:12:31.824 13:41:34 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:31.824 13:41:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.824 13:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.824 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:12:31.824 ************************************ 00:12:31.824 START TEST nvmf_multipath 00:12:31.824 ************************************ 00:12:31.824 13:41:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:31.824 * Looking for test storage... 
00:12:31.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.824 13:41:34 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.824 13:41:34 -- nvmf/common.sh@7 -- # uname -s 00:12:31.824 13:41:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.824 13:41:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.824 13:41:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.824 13:41:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.824 13:41:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.824 13:41:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.824 13:41:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.824 13:41:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.824 13:41:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.824 13:41:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.824 13:41:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.824 13:41:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.824 13:41:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.824 13:41:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.824 13:41:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.824 13:41:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.824 13:41:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.824 13:41:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.824 13:41:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.824 13:41:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.824 13:41:34 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.824 13:41:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.824 13:41:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.824 13:41:34 -- paths/export.sh@5 -- # export PATH 00:12:31.824 13:41:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.824 13:41:34 -- nvmf/common.sh@47 -- # : 0 00:12:31.824 13:41:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.824 13:41:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.824 13:41:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.824 13:41:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.824 13:41:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.824 13:41:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.824 13:41:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.824 13:41:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.824 13:41:34 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.824 13:41:34 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.824 13:41:34 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:31.824 13:41:34 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.824 13:41:34 -- target/multipath.sh@43 -- # nvmftestinit 00:12:31.824 13:41:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.824 13:41:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.824 13:41:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.824 13:41:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.824 13:41:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.824 13:41:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:31.824 13:41:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.824 13:41:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.824 13:41:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:31.824 13:41:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:31.824 13:41:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.824 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.726 13:41:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:33.726 13:41:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.726 13:41:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.726 13:41:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.726 13:41:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.726 13:41:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.726 13:41:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.726 13:41:36 -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.726 13:41:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.726 13:41:36 -- nvmf/common.sh@296 -- # e810=() 00:12:33.726 13:41:36 -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.726 13:41:36 -- nvmf/common.sh@297 -- # x722=() 00:12:33.726 13:41:36 -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.726 13:41:36 -- nvmf/common.sh@298 -- # mlx=() 00:12:33.726 13:41:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.726 13:41:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:12:33.726 13:41:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.726 13:41:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.726 13:41:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.726 13:41:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.726 13:41:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.726 13:41:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:33.726 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:33.726 13:41:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.726 13:41:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:33.726 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:33.726 13:41:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.726 13:41:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.727 13:41:36 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.727 13:41:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.727 13:41:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.727 13:41:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.727 13:41:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.727 13:41:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.727 13:41:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:33.727 13:41:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.727 13:41:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:33.727 Found net devices under 0000:84:00.0: cvl_0_0 00:12:33.727 13:41:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.727 13:41:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.727 13:41:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.727 13:41:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:33.727 13:41:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.727 13:41:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:33.727 Found net devices under 0000:84:00.1: cvl_0_1 00:12:33.727 13:41:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.727 13:41:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:33.727 13:41:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:33.727 13:41:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:33.727 13:41:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:33.727 13:41:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:33.727 13:41:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.727 13:41:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.727 13:41:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.727 13:41:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:12:33.727 13:41:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.727 13:41:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.727 13:41:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.727 13:41:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.727 13:41:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.727 13:41:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.727 13:41:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.727 13:41:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.727 13:41:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.727 13:41:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.727 13:41:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.727 13:41:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.727 13:41:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.986 13:41:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.986 13:41:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.986 13:41:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:33.986 00:12:33.986 --- 10.0.0.2 ping statistics --- 00:12:33.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.986 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:33.986 13:41:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:12:33.986 00:12:33.986 --- 10.0.0.1 ping statistics --- 00:12:33.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.986 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:33.986 13:41:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.986 13:41:36 -- nvmf/common.sh@411 -- # return 0 00:12:33.986 13:41:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:33.986 13:41:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.986 13:41:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:33.986 13:41:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:33.986 13:41:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.986 13:41:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:33.986 13:41:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:33.986 13:41:36 -- target/multipath.sh@45 -- # '[' -z ']' 00:12:33.986 13:41:36 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:33.986 only one NIC for nvmf test 00:12:33.986 13:41:36 -- target/multipath.sh@47 -- # nvmftestfini 00:12:33.986 13:41:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:33.986 13:41:36 -- nvmf/common.sh@117 -- # sync 00:12:33.986 13:41:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.986 13:41:36 -- nvmf/common.sh@120 -- # set +e 00:12:33.986 13:41:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.986 13:41:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.986 rmmod nvme_tcp 00:12:33.986 rmmod nvme_fabrics 00:12:33.986 rmmod nvme_keyring 00:12:33.986 13:41:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.986 13:41:36 -- nvmf/common.sh@124 -- # set -e 00:12:33.986 13:41:36 -- nvmf/common.sh@125 -- # return 0 00:12:33.986 13:41:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:33.986 13:41:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:33.986 13:41:36 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:33.986 13:41:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:33.986 13:41:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.986 13:41:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.986 13:41:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.986 13:41:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.986 13:41:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.891 13:41:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.891 13:41:38 -- target/multipath.sh@48 -- # exit 0 00:12:35.891 13:41:38 -- target/multipath.sh@1 -- # nvmftestfini 00:12:35.891 13:41:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:35.891 13:41:38 -- nvmf/common.sh@117 -- # sync 00:12:35.891 13:41:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.891 13:41:38 -- nvmf/common.sh@120 -- # set +e 00:12:35.891 13:41:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.891 13:41:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.891 13:41:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.891 13:41:38 -- nvmf/common.sh@124 -- # set -e 00:12:35.891 13:41:38 -- nvmf/common.sh@125 -- # return 0 00:12:35.891 13:41:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:35.891 13:41:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:35.891 13:41:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:35.891 13:41:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:35.891 13:41:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.891 13:41:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.891 13:41:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.891 13:41:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.891 13:41:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.891 13:41:38 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.891 00:12:35.891 real 0m4.266s 00:12:35.891 user 0m0.754s 00:12:35.891 sys 0m1.502s 00:12:35.891 13:41:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.891 13:41:38 -- common/autotest_common.sh@10 -- # set +x 00:12:35.891 ************************************ 00:12:35.891 END TEST nvmf_multipath 00:12:35.891 ************************************ 00:12:36.148 13:41:38 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.148 13:41:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:36.148 13:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.148 13:41:38 -- common/autotest_common.sh@10 -- # set +x 00:12:36.148 ************************************ 00:12:36.148 START TEST nvmf_zcopy 00:12:36.148 ************************************ 00:12:36.148 13:41:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.148 * Looking for test storage... 
00:12:36.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.148 13:41:38 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.148 13:41:38 -- nvmf/common.sh@7 -- # uname -s 00:12:36.148 13:41:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.148 13:41:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.148 13:41:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.148 13:41:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.148 13:41:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.148 13:41:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.148 13:41:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.148 13:41:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.148 13:41:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.148 13:41:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.148 13:41:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.148 13:41:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.148 13:41:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.148 13:41:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.148 13:41:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.148 13:41:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.148 13:41:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.148 13:41:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.148 13:41:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.148 13:41:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.148 13:41:38 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.148 13:41:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.148 13:41:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.148 13:41:38 -- paths/export.sh@5 -- # export PATH 00:12:36.148 13:41:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.148 13:41:38 -- nvmf/common.sh@47 -- # : 0 00:12:36.148 13:41:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.148 13:41:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.148 13:41:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.148 13:41:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.148 13:41:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.148 13:41:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.148 13:41:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.148 13:41:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.148 13:41:38 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:36.148 13:41:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:36.148 13:41:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.148 13:41:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:36.149 13:41:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:36.149 13:41:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:36.149 13:41:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.149 13:41:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.149 13:41:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.149 13:41:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:36.149 13:41:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:36.149 13:41:38 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:12:36.149 13:41:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.679 13:41:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.679 13:41:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.679 13:41:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.679 13:41:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.679 13:41:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.679 13:41:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.679 13:41:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.679 13:41:40 -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.679 13:41:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.679 13:41:40 -- nvmf/common.sh@296 -- # e810=() 00:12:38.679 13:41:40 -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.679 13:41:40 -- nvmf/common.sh@297 -- # x722=() 00:12:38.679 13:41:40 -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.679 13:41:40 -- nvmf/common.sh@298 -- # mlx=() 00:12:38.679 13:41:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.679 13:41:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.679 13:41:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.679 13:41:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.679 13:41:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.679 13:41:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:38.679 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:38.679 13:41:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.679 13:41:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:38.679 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:38.679 13:41:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:12:38.679 13:41:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.679 13:41:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.679 13:41:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:38.679 Found net devices under 0000:84:00.0: cvl_0_0 00:12:38.679 13:41:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.679 13:41:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.679 13:41:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.679 13:41:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.679 13:41:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:38.679 Found net devices under 0000:84:00.1: cvl_0_1 00:12:38.679 13:41:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.679 13:41:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:38.679 13:41:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:38.679 13:41:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:38.679 13:41:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.679 13:41:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.679 13:41:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.679 13:41:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.679 13:41:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.679 13:41:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.679 13:41:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.679 13:41:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:12:38.679 13:41:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.679 13:41:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.679 13:41:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.679 13:41:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.679 13:41:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.679 13:41:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.679 13:41:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.679 13:41:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.679 13:41:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.679 13:41:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.679 13:41:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.679 13:41:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:12:38.679 00:12:38.679 --- 10.0.0.2 ping statistics --- 00:12:38.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.679 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:12:38.679 13:41:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:12:38.679 00:12:38.679 --- 10.0.0.1 ping statistics --- 00:12:38.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.679 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:38.679 13:41:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.679 13:41:41 -- nvmf/common.sh@411 -- # return 0 00:12:38.679 13:41:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.679 13:41:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.679 13:41:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:38.679 13:41:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:38.679 13:41:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.679 13:41:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:38.679 13:41:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:38.679 13:41:41 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:38.679 13:41:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:38.679 13:41:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.679 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.679 13:41:41 -- nvmf/common.sh@470 -- # nvmfpid=2584181 00:12:38.679 13:41:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:38.679 13:41:41 -- nvmf/common.sh@471 -- # waitforlisten 2584181 00:12:38.679 13:41:41 -- common/autotest_common.sh@817 -- # '[' -z 2584181 ']' 00:12:38.679 13:41:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.679 13:41:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.679 13:41:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:38.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.679 13:41:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.679 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.679 [2024-04-18 13:41:41.089677] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:38.679 [2024-04-18 13:41:41.089778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.679 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.679 [2024-04-18 13:41:41.155350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.679 [2024-04-18 13:41:41.264038] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.680 [2024-04-18 13:41:41.264093] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.680 [2024-04-18 13:41:41.264120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.680 [2024-04-18 13:41:41.264132] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.680 [2024-04-18 13:41:41.264141] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
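waitforlisten, logged above, blocks until the freshly started nvmf_tgt opens its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the two checks are process liveness plus socket existence — the helper name and the 0.1 s polling interval here are illustrative, not SPDK's exact implementation:

```shell
# Minimal waitforlisten-style loop (illustrative, not SPDK's exact code):
# poll until the target process has created its UNIX-domain RPC socket,
# bailing out early if the process dies first.
wait_for_rpc() {
    pid=$1 sock=$2 retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
        [ -S "$sock" ] && return 0               # RPC socket has appeared
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # timed out
}

# e.g.: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock || exit 1
```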
00:12:38.680 [2024-04-18 13:41:41.264190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.680 13:41:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.680 13:41:41 -- common/autotest_common.sh@850 -- # return 0 00:12:38.680 13:41:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:38.680 13:41:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 13:41:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.680 13:41:41 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:38.680 13:41:41 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 [2024-04-18 13:41:41.416556] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 [2024-04-18 13:41:41.432796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 malloc0 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:38.680 13:41:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.680 13:41:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.680 13:41:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.680 13:41:41 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:38.680 13:41:41 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:38.680 13:41:41 -- nvmf/common.sh@521 -- # config=() 00:12:38.680 13:41:41 -- nvmf/common.sh@521 -- # local subsystem config 00:12:38.680 13:41:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:38.680 13:41:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:38.680 { 00:12:38.680 "params": { 00:12:38.680 "name": "Nvme$subsystem", 00:12:38.680 "trtype": "$TEST_TRANSPORT", 00:12:38.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:38.680 "adrfam": "ipv4", 00:12:38.680 "trsvcid": "$NVMF_PORT", 00:12:38.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:38.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:38.680 "hdgst": ${hdgst:-false}, 00:12:38.680 "ddgst": ${ddgst:-false} 00:12:38.680 }, 00:12:38.680 "method": "bdev_nvme_attach_controller" 00:12:38.680 } 00:12:38.680 
EOF 00:12:38.680 )") 00:12:38.680 13:41:41 -- nvmf/common.sh@543 -- # cat 00:12:38.680 13:41:41 -- nvmf/common.sh@545 -- # jq . 00:12:38.680 13:41:41 -- nvmf/common.sh@546 -- # IFS=, 00:12:38.680 13:41:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:38.680 "params": { 00:12:38.680 "name": "Nvme1", 00:12:38.680 "trtype": "tcp", 00:12:38.680 "traddr": "10.0.0.2", 00:12:38.680 "adrfam": "ipv4", 00:12:38.680 "trsvcid": "4420", 00:12:38.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.680 "hdgst": false, 00:12:38.680 "ddgst": false 00:12:38.680 }, 00:12:38.680 "method": "bdev_nvme_attach_controller" 00:12:38.680 }' 00:12:38.939 [2024-04-18 13:41:41.515863] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:38.939 [2024-04-18 13:41:41.515945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584259 ] 00:12:38.939 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.939 [2024-04-18 13:41:41.584800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.939 [2024-04-18 13:41:41.704465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.197 Running I/O for 10 seconds... 
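The gen_nvmf_target_json fragment above builds one bdev_nvme_attach_controller entry per subsystem from a heredoc and merges them via jq before feeding the result to bdevperf over /dev/fd. A simplified POSIX sketch under the same defaults as this run (tcp, 10.0.0.2, port 4420); joining the entries into a JSON array stands in for the script's jq merge step:

```shell
# Simplified POSIX sketch of the gen_nvmf_target_json fragment above:
# one bdev_nvme_attach_controller entry per subsystem, with this run's
# defaults (tcp, 10.0.0.2, port 4420). The array join below is a
# stand-in for the original script's jq merge.
gen_target_json() {
    out="" sep=""
    for subsystem in "${@:-1}"; do
        out="$out$sep$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )"
        sep=","
    done
    printf '[%s]\n' "$out"
}

gen_target_json 1
```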
00:12:49.169 00:12:49.169 Latency(us) 00:12:49.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:49.169 Verification LBA range: start 0x0 length 0x1000 00:12:49.169 Nvme1n1 : 10.01 5456.84 42.63 0.00 0.00 23393.01 497.59 33981.63 00:12:49.169 =================================================================================================================== 00:12:49.169 Total : 5456.84 42.63 0.00 0.00 23393.01 497.59 33981.63 00:12:49.757 13:41:52 -- target/zcopy.sh@39 -- # perfpid=2585452 00:12:49.757 13:41:52 -- target/zcopy.sh@41 -- # xtrace_disable 00:12:49.757 13:41:52 -- common/autotest_common.sh@10 -- # set +x 00:12:49.757 13:41:52 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:49.757 13:41:52 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:49.757 13:41:52 -- nvmf/common.sh@521 -- # config=() 00:12:49.757 13:41:52 -- nvmf/common.sh@521 -- # local subsystem config 00:12:49.757 13:41:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:49.757 13:41:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:49.757 { 00:12:49.757 "params": { 00:12:49.757 "name": "Nvme$subsystem", 00:12:49.757 "trtype": "$TEST_TRANSPORT", 00:12:49.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.757 "adrfam": "ipv4", 00:12:49.757 "trsvcid": "$NVMF_PORT", 00:12:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.757 "hdgst": ${hdgst:-false}, 00:12:49.757 "ddgst": ${ddgst:-false} 00:12:49.757 }, 00:12:49.757 "method": "bdev_nvme_attach_controller" 00:12:49.757 } 00:12:49.757 EOF 00:12:49.757 )") 00:12:49.757 [2024-04-18 13:41:52.259298] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.259341] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 13:41:52 -- nvmf/common.sh@543 -- # cat 00:12:49.757 13:41:52 -- nvmf/common.sh@545 -- # jq . 00:12:49.757 13:41:52 -- nvmf/common.sh@546 -- # IFS=, 00:12:49.757 13:41:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:49.757 "params": { 00:12:49.757 "name": "Nvme1", 00:12:49.757 "trtype": "tcp", 00:12:49.757 "traddr": "10.0.0.2", 00:12:49.757 "adrfam": "ipv4", 00:12:49.757 "trsvcid": "4420", 00:12:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.757 "hdgst": false, 00:12:49.757 "ddgst": false 00:12:49.757 }, 00:12:49.757 "method": "bdev_nvme_attach_controller" 00:12:49.757 }' 00:12:49.757 [2024-04-18 13:41:52.267269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.267293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.275289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.275312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.283305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.283328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.291325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.291347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.299347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.299369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.300452] Starting SPDK 
v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:12:49.757 [2024-04-18 13:41:52.300558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585452 ] 00:12:49.757 [2024-04-18 13:41:52.307368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.307391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.315389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.315411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.323410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.323432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.757 [2024-04-18 13:41:52.331433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.331470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.339473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.339501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.347493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.757 [2024-04-18 13:41:52.347513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.757 [2024-04-18 13:41:52.355513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:49.757 [2024-04-18 13:41:52.355554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.363549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.363574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.366427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.758 [2024-04-18 13:41:52.371582] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.371604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.379628] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.379665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.387632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.387661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.395637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.395663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.403661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.403686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.411682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.411707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.419705] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.419729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.427727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.427751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.435773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.435808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.443781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.443809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.451795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.451819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.459816] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.459840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.467838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.467862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.475861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.475885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.483884] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.483908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.486724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.758 [2024-04-18 13:41:52.491905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.491929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.499927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.499952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.507971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.508005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.515992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.516026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.524018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.524053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.532037] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.532073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.540058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.540095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 
13:41:52.548083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.548119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.758 [2024-04-18 13:41:52.556096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.758 [2024-04-18 13:41:52.556128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.564108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.564134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.572148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.572191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.580169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.580209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.588184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.588223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.596221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.596244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.604242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.604270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.612271] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.612295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.620283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.620308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.628303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.628326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.636325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.636348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.644347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.644370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.652371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.652394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.660408] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.660430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.668413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.668433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.676437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.676475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 Running I/O for 5 seconds... 00:12:50.018 [2024-04-18 13:41:52.684472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.684497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.697479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.697503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.707976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.708007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.721172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.721224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.733315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.733345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.745592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.745623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.758435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.758475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.770887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.770918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018 [2024-04-18 13:41:52.782946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.018 [2024-04-18 13:41:52.782977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.018
[... the same error pair — subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 12 ms intervals from [2024-04-18 13:41:52.795111] through [2024-04-18 13:41:54.854240]; approximately 170 further repetitions elided ...]
[2024-04-18 13:41:54.854273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.103 [2024-04-18 13:41:54.866879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.103 [2024-04-18 13:41:54.866910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.103 [2024-04-18 13:41:54.878812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.103 [2024-04-18 13:41:54.878843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.103 [2024-04-18 13:41:54.891110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.103 [2024-04-18 13:41:54.891152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.103 [2024-04-18 13:41:54.903047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.103 [2024-04-18 13:41:54.903077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.914975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.915016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.927091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.927122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.939162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.939222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.950660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.950690] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.962239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.962265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.974057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.974087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.985857] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.985888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:54.998097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:54.998128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.010016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.010046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.022045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.022079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.034291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.034317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.046187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.046217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:52.362 [2024-04-18 13:41:55.058047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.058077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.070088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.070119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.081692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.081723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.094299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.094326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.106609] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.106640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.119070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.119100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.131226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.131268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.143745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.143776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.156051] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.156081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.362 [2024-04-18 13:41:55.168372] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.362 [2024-04-18 13:41:55.168400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.180895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.180927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.193165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.193204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.205690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.205720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.218088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.218118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.229772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.229803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.241533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.241565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.253240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.253266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.265060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.265090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.277319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.277345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.289927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.289957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.302538] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.302569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.314731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.314762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.326893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.326924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.339174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.339214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.350973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 
[2024-04-18 13:41:55.351003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.363105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.363136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.375029] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.375061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.386900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.386931] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.399235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.399262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.411321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.411348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.622 [2024-04-18 13:41:55.423405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.622 [2024-04-18 13:41:55.423432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.435642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.435673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.447728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.447758] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.459743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.459774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.471758] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.471788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.484107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.484138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.496306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.496333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.508566] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.508597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.520480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.520506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.532305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.532331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.544586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.544617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:52.881 [2024-04-18 13:41:55.556586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.556618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.569143] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.569173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.581080] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.581112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.593088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.593119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.606958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.606988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.618617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.618647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.630488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.630520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.642255] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.642282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.654698] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.654729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.669052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.669078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.881 [2024-04-18 13:41:55.679410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.881 [2024-04-18 13:41:55.679436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.692086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.692117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.704316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.704343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.716385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.716413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.728435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.728476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.740256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.740282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.752168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.752221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.763775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.763806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.775821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.775852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.787651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.787682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.799684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.799715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.811499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.811524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.825445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.825485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.836797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.836828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.848767] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 
[2024-04-18 13:41:55.848805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.861333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.861359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.873514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.873558] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.885798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.885828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.897517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.897542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.909677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.909709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.921741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.921772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.934055] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.934086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.946200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.946242] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.958589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.958624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.970561] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.970593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.982881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.982912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:55.995225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:55.995265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.007235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.007262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.019192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.019234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.031089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.031119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.043376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.043402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:53.328 [2024-04-18 13:41:56.055680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.055710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.067971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.068001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.080226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.080254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.092284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.092311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.104515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.104559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.116533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.116564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.328 [2024-04-18 13:41:56.129258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.328 [2024-04-18 13:41:56.129290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.585 [2024-04-18 13:41:56.140819] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.585 [2024-04-18 13:41:56.140850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.585 [2024-04-18 13:41:56.152512] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.585 [2024-04-18 13:41:56.152542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps through 2024-04-18 13:41:57.704834 ...]
00:12:55.138 Latency(us)
00:12:55.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:55.138 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:55.138 Nvme1n1 : 5.01 10523.03 82.21 0.00 0.00 12147.55 4903.06 22913.33
00:12:55.138 ===================================================================================================================
00:12:55.138 Total : 10523.03 82.21 0.00 0.00 12147.55 4903.06 22913.33
00:12:55.138 [2024-04-18 13:41:57.709779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.138 [2024-04-18 13:41:57.709801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps through 2024-04-18 13:41:57.974556 ...]
00:12:55.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2585452) - No such process
00:12:55.397 13:41:57 -- target/zcopy.sh@49 -- # wait 2585452
00:12:55.397 13:41:57 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:55.397 13:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.397 13:41:57 -- common/autotest_common.sh@10 -- # set +x
00:12:55.397 13:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.397 13:41:57 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:55.397 13:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.397 13:41:57 -- common/autotest_common.sh@10 -- # set +x
00:12:55.397 delay0
00:12:55.397 13:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.397 13:41:57 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:55.397 13:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.397 13:41:57 -- common/autotest_common.sh@10 -- # set +x
00:12:55.397 13:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.397 13:41:58 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:12:55.397 EAL: No free 2048 kB hugepages reported on node 1
00:12:55.397 [2024-04-18 13:41:58.137324] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:01.957 Initializing NVMe Controllers 00:13:01.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:01.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:01.957 Initialization complete. Launching workers. 00:13:01.957 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103 00:13:01.957 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 41 00:13:01.957 success 197, unsuccess 185, failed 0 00:13:01.957 13:42:04 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:01.957 13:42:04 -- target/zcopy.sh@60 -- # nvmftestfini 00:13:01.957 13:42:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:01.957 13:42:04 -- nvmf/common.sh@117 -- # sync 00:13:01.957 13:42:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.957 13:42:04 -- nvmf/common.sh@120 -- # set +e 00:13:01.957 13:42:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.957 13:42:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.957 rmmod nvme_tcp 00:13:01.957 rmmod nvme_fabrics 00:13:01.957 rmmod nvme_keyring 00:13:01.957 13:42:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.957 13:42:04 -- nvmf/common.sh@124 -- # set -e 00:13:01.957 13:42:04 -- nvmf/common.sh@125 -- # return 0 00:13:01.957 13:42:04 -- nvmf/common.sh@478 -- # '[' -n 2584181 ']' 00:13:01.957 13:42:04 -- nvmf/common.sh@479 -- # killprocess 2584181 00:13:01.957 13:42:04 -- common/autotest_common.sh@936 -- # '[' -z 2584181 ']' 00:13:01.957 13:42:04 -- common/autotest_common.sh@940 -- # kill -0 2584181 00:13:01.957 13:42:04 -- common/autotest_common.sh@941 -- # uname 00:13:01.957 13:42:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:13:01.957 13:42:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2584181 00:13:01.957 13:42:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:01.957 13:42:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:01.957 13:42:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2584181' 00:13:01.957 killing process with pid 2584181 00:13:01.957 13:42:04 -- common/autotest_common.sh@955 -- # kill 2584181 00:13:01.957 13:42:04 -- common/autotest_common.sh@960 -- # wait 2584181 00:13:01.957 13:42:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:01.957 13:42:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:01.957 13:42:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:01.957 13:42:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.957 13:42:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.957 13:42:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.957 13:42:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.957 13:42:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.491 13:42:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.491 00:13:04.491 real 0m27.871s 00:13:04.491 user 0m40.396s 00:13:04.491 sys 0m8.996s 00:13:04.491 13:42:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:04.491 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:13:04.491 ************************************ 00:13:04.491 END TEST nvmf_zcopy 00:13:04.491 ************************************ 00:13:04.491 13:42:06 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:04.491 13:42:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.491 13:42:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.491 13:42:06 -- common/autotest_common.sh@10 -- # set +x 
00:13:04.491 ************************************ 00:13:04.491 START TEST nvmf_nmic 00:13:04.491 ************************************ 00:13:04.491 13:42:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:04.491 * Looking for test storage... 00:13:04.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.491 13:42:06 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.491 13:42:06 -- nvmf/common.sh@7 -- # uname -s 00:13:04.491 13:42:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.491 13:42:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.491 13:42:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.491 13:42:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.491 13:42:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.491 13:42:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.491 13:42:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.491 13:42:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.491 13:42:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.491 13:42:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.491 13:42:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:04.491 13:42:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:04.491 13:42:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.491 13:42:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.491 13:42:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.491 13:42:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.491 13:42:06 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.491 13:42:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.491 13:42:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.491 13:42:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.491 13:42:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.491 13:42:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.491 13:42:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.491 13:42:06 -- paths/export.sh@5 -- # export PATH 00:13:04.491 13:42:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.491 13:42:06 -- nvmf/common.sh@47 -- # : 0 00:13:04.491 13:42:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.491 13:42:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.491 13:42:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.491 13:42:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.491 13:42:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.491 13:42:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.491 13:42:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.491 13:42:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.491 13:42:06 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.491 13:42:06 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.491 13:42:06 -- target/nmic.sh@14 -- # 
nvmftestinit 00:13:04.491 13:42:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:04.491 13:42:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.491 13:42:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:04.491 13:42:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:04.491 13:42:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:04.491 13:42:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.491 13:42:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.491 13:42:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.491 13:42:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:04.491 13:42:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:04.491 13:42:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.491 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 13:42:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.390 13:42:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.390 13:42:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.390 13:42:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.390 13:42:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.390 13:42:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.390 13:42:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.390 13:42:08 -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.390 13:42:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.390 13:42:08 -- nvmf/common.sh@296 -- # e810=() 00:13:06.390 13:42:08 -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.390 13:42:08 -- nvmf/common.sh@297 -- # x722=() 00:13:06.390 13:42:08 -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.390 13:42:08 -- nvmf/common.sh@298 -- # mlx=() 00:13:06.390 13:42:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.390 13:42:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.390 13:42:08 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.390 13:42:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.390 13:42:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.390 13:42:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.390 13:42:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.390 13:42:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:06.390 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:06.390 13:42:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.390 13:42:08 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:06.390 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:06.390 13:42:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.390 13:42:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.390 13:42:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.390 13:42:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.390 13:42:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.390 13:42:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.390 13:42:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:06.390 Found net devices under 0000:84:00.0: cvl_0_0 00:13:06.390 13:42:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.390 13:42:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.391 13:42:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.391 13:42:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.391 13:42:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.391 13:42:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:06.391 Found net devices under 0000:84:00.1: cvl_0_1 00:13:06.391 13:42:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.391 13:42:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:06.391 13:42:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:06.391 13:42:08 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:13:06.391 13:42:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:06.391 13:42:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:06.391 13:42:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.391 13:42:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.391 13:42:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.391 13:42:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.391 13:42:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.391 13:42:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.391 13:42:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.391 13:42:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.391 13:42:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.391 13:42:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.391 13:42:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.391 13:42:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.391 13:42:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.391 13:42:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.391 13:42:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.391 13:42:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.391 13:42:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.391 13:42:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.391 13:42:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.391 13:42:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:06.391 00:13:06.391 --- 10.0.0.2 ping statistics --- 00:13:06.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.391 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:06.391 13:42:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:06.391 00:13:06.391 --- 10.0.0.1 ping statistics --- 00:13:06.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.391 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:06.391 13:42:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.391 13:42:08 -- nvmf/common.sh@411 -- # return 0 00:13:06.391 13:42:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:06.391 13:42:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.391 13:42:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:06.391 13:42:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:06.391 13:42:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.391 13:42:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:06.391 13:42:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:06.391 13:42:09 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:06.391 13:42:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:06.391 13:42:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:06.391 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:06.391 13:42:09 -- nvmf/common.sh@470 -- # nvmfpid=2589449 00:13:06.391 13:42:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.391 13:42:09 -- nvmf/common.sh@471 -- # waitforlisten 2589449 00:13:06.391 13:42:09 -- common/autotest_common.sh@817 -- 
# '[' -z 2589449 ']' 00:13:06.391 13:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.391 13:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:06.391 13:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.391 13:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:06.391 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:06.391 [2024-04-18 13:42:09.057524] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:06.391 [2024-04-18 13:42:09.057630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.391 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.391 [2024-04-18 13:42:09.126111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.649 [2024-04-18 13:42:09.240959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.649 [2024-04-18 13:42:09.241029] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.649 [2024-04-18 13:42:09.241052] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.649 [2024-04-18 13:42:09.241063] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.649 [2024-04-18 13:42:09.241073] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:06.649 [2024-04-18 13:42:09.241154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.649 [2024-04-18 13:42:09.241235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.649 [2024-04-18 13:42:09.241291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.649 [2024-04-18 13:42:09.241294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.215 13:42:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:07.215 13:42:09 -- common/autotest_common.sh@850 -- # return 0 00:13:07.215 13:42:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:07.215 13:42:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:07.215 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:13:07.215 13:42:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.215 13:42:10 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.215 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.215 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 [2024-04-18 13:42:10.023972] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 Malloc0 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:13:07.475 13:42:10 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 [2024-04-18 13:42:10.077540] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:07.475 test case1: single bdev can't be used in multiple subsystems 00:13:07.475 13:42:10 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.475 13:42:10 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 
-- # set +x 00:13:07.475 [2024-04-18 13:42:10.101346] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.475 [2024-04-18 13:42:10.101377] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.475 [2024-04-18 13:42:10.101393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.475 request: 00:13:07.475 { 00:13:07.475 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.475 "namespace": { 00:13:07.475 "bdev_name": "Malloc0", 00:13:07.475 "no_auto_visible": false 00:13:07.475 }, 00:13:07.475 "method": "nvmf_subsystem_add_ns", 00:13:07.475 "req_id": 1 00:13:07.475 } 00:13:07.475 Got JSON-RPC error response 00:13:07.475 response: 00:13:07.475 { 00:13:07.475 "code": -32602, 00:13:07.475 "message": "Invalid parameters" 00:13:07.475 } 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.475 13:42:10 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.475 13:42:10 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.475 Adding namespace failed - expected result. 
00:13:07.475 13:42:10 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:07.475 test case2: host connect to nvmf target in multiple paths 00:13:07.475 13:42:10 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:07.475 13:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.475 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:13:07.475 [2024-04-18 13:42:10.109464] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:07.475 13:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.475 13:42:10 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.041 13:42:10 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:08.643 13:42:11 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.643 13:42:11 -- common/autotest_common.sh@1184 -- # local i=0 00:13:08.643 13:42:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.643 13:42:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:08.643 13:42:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:10.547 13:42:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:10.547 13:42:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:10.547 13:42:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.547 13:42:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:10.547 13:42:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:13:10.547 13:42:13 -- common/autotest_common.sh@1194 -- # return 0 00:13:10.547 13:42:13 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:10.547 [global] 00:13:10.547 thread=1 00:13:10.547 invalidate=1 00:13:10.547 rw=write 00:13:10.547 time_based=1 00:13:10.547 runtime=1 00:13:10.547 ioengine=libaio 00:13:10.547 direct=1 00:13:10.547 bs=4096 00:13:10.547 iodepth=1 00:13:10.547 norandommap=0 00:13:10.547 numjobs=1 00:13:10.547 00:13:10.547 verify_dump=1 00:13:10.547 verify_backlog=512 00:13:10.547 verify_state_save=0 00:13:10.547 do_verify=1 00:13:10.547 verify=crc32c-intel 00:13:10.547 [job0] 00:13:10.547 filename=/dev/nvme0n1 00:13:10.805 Could not set queue depth (nvme0n1) 00:13:10.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.805 fio-3.35 00:13:10.805 Starting 1 thread 00:13:12.179 00:13:12.179 job0: (groupid=0, jobs=1): err= 0: pid=2590113: Thu Apr 18 13:42:14 2024 00:13:12.179 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:12.179 slat (nsec): min=5162, max=59162, avg=12727.29, stdev=6099.83 00:13:12.179 clat (usec): min=262, max=1105, avg=328.12, stdev=75.33 00:13:12.179 lat (usec): min=269, max=1115, avg=340.84, stdev=79.10 00:13:12.179 clat percentiles (usec): 00:13:12.179 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:13:12.179 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:13:12.179 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 510], 00:13:12.179 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 701], 99.95th=[ 1106], 00:13:12.179 | 99.99th=[ 1106] 00:13:12.179 write: IOPS=1688, BW=6753KiB/s (6915kB/s)(6760KiB/1001msec); 0 zone resets 00:13:12.179 slat (usec): min=6, max=28854, avg=33.89, stdev=701.57 00:13:12.179 clat (usec): min=148, max=565, avg=240.48, stdev=97.78 00:13:12.179 lat (usec): min=155, max=29313, avg=274.37, 
stdev=714.66 00:13:12.179 clat percentiles (usec): 00:13:12.179 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:13:12.179 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 210], 00:13:12.179 | 70.00th=[ 243], 80.00th=[ 302], 90.00th=[ 433], 95.00th=[ 465], 00:13:12.179 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 570], 00:13:12.179 | 99.99th=[ 570] 00:13:12.179 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:12.179 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:12.179 lat (usec) : 250=37.76%, 500=58.96%, 750=3.25% 00:13:12.179 lat (msec) : 2=0.03% 00:13:12.179 cpu : usr=2.60%, sys=4.70%, ctx=3229, majf=0, minf=2 00:13:12.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:12.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.180 issued rwts: total=1536,1690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:12.180 00:13:12.180 Run status group 0 (all jobs): 00:13:12.180 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:13:12.180 WRITE: bw=6753KiB/s (6915kB/s), 6753KiB/s-6753KiB/s (6915kB/s-6915kB/s), io=6760KiB (6922kB), run=1001-1001msec 00:13:12.180 00:13:12.180 Disk stats (read/write): 00:13:12.180 nvme0n1: ios=1547/1536, merge=0/0, ticks=822/325, in_queue=1147, util=98.60% 00:13:12.180 13:42:14 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:12.180 13:42:14 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.180 13:42:14 -- common/autotest_common.sh@1205 -- # local i=0 00:13:12.180 13:42:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 
00:13:12.180 13:42:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.180 13:42:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:12.180 13:42:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.180 13:42:14 -- common/autotest_common.sh@1217 -- # return 0 00:13:12.180 13:42:14 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:12.180 13:42:14 -- target/nmic.sh@53 -- # nvmftestfini 00:13:12.180 13:42:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:12.180 13:42:14 -- nvmf/common.sh@117 -- # sync 00:13:12.180 13:42:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.180 13:42:14 -- nvmf/common.sh@120 -- # set +e 00:13:12.180 13:42:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.180 13:42:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.180 rmmod nvme_tcp 00:13:12.180 rmmod nvme_fabrics 00:13:12.180 rmmod nvme_keyring 00:13:12.180 13:42:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.180 13:42:14 -- nvmf/common.sh@124 -- # set -e 00:13:12.180 13:42:14 -- nvmf/common.sh@125 -- # return 0 00:13:12.180 13:42:14 -- nvmf/common.sh@478 -- # '[' -n 2589449 ']' 00:13:12.180 13:42:14 -- nvmf/common.sh@479 -- # killprocess 2589449 00:13:12.180 13:42:14 -- common/autotest_common.sh@936 -- # '[' -z 2589449 ']' 00:13:12.180 13:42:14 -- common/autotest_common.sh@940 -- # kill -0 2589449 00:13:12.180 13:42:14 -- common/autotest_common.sh@941 -- # uname 00:13:12.180 13:42:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.180 13:42:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2589449 00:13:12.180 13:42:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:12.180 13:42:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:12.180 13:42:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2589449' 00:13:12.180 killing process with pid 2589449 00:13:12.180 
13:42:14 -- common/autotest_common.sh@955 -- # kill 2589449 00:13:12.180 13:42:14 -- common/autotest_common.sh@960 -- # wait 2589449 00:13:12.438 13:42:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:12.438 13:42:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:12.438 13:42:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:12.438 13:42:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.438 13:42:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.438 13:42:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.438 13:42:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.438 13:42:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.969 13:42:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.969 00:13:14.969 real 0m10.459s 00:13:14.969 user 0m24.667s 00:13:14.969 sys 0m2.432s 00:13:14.969 13:42:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.969 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:13:14.969 ************************************ 00:13:14.969 END TEST nvmf_nmic 00:13:14.969 ************************************ 00:13:14.969 13:42:17 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.969 13:42:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:14.969 13:42:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.969 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:13:14.969 ************************************ 00:13:14.969 START TEST nvmf_fio_target 00:13:14.969 ************************************ 00:13:14.969 13:42:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.969 * Looking for test storage... 
00:13:14.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.969 13:42:17 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.969 13:42:17 -- nvmf/common.sh@7 -- # uname -s 00:13:14.969 13:42:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.969 13:42:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.969 13:42:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.969 13:42:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.969 13:42:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.969 13:42:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.969 13:42:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.969 13:42:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.969 13:42:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.969 13:42:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.969 13:42:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:14.969 13:42:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:14.969 13:42:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.969 13:42:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.969 13:42:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.969 13:42:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.969 13:42:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.969 13:42:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.969 13:42:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.969 13:42:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.969 13:42:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.969 13:42:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.969 13:42:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.969 13:42:17 -- paths/export.sh@5 -- # export PATH 00:13:14.969 13:42:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.969 13:42:17 -- nvmf/common.sh@47 -- # : 0 00:13:14.969 13:42:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.969 13:42:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.969 13:42:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.969 13:42:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.969 13:42:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.969 13:42:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.969 13:42:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.969 13:42:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.969 13:42:17 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.969 13:42:17 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.969 13:42:17 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.969 13:42:17 -- target/fio.sh@16 -- # nvmftestinit 00:13:14.969 13:42:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:14.969 13:42:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.969 13:42:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:14.969 13:42:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:14.969 13:42:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:14.969 13:42:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.969 13:42:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:13:14.969 13:42:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.969 13:42:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:14.969 13:42:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:14.969 13:42:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.969 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:13:16.867 13:42:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.867 13:42:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.867 13:42:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.867 13:42:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.867 13:42:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.867 13:42:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.867 13:42:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.867 13:42:19 -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.867 13:42:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.867 13:42:19 -- nvmf/common.sh@296 -- # e810=() 00:13:16.867 13:42:19 -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.867 13:42:19 -- nvmf/common.sh@297 -- # x722=() 00:13:16.867 13:42:19 -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.867 13:42:19 -- nvmf/common.sh@298 -- # mlx=() 00:13:16.867 13:42:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.867 13:42:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.867 13:42:19 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.867 13:42:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.867 13:42:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.867 13:42:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.867 13:42:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.867 13:42:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:16.867 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:16.867 13:42:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.867 13:42:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:16.867 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:16.867 13:42:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.867 
13:42:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.867 13:42:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.867 13:42:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.867 13:42:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.867 13:42:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:16.867 Found net devices under 0000:84:00.0: cvl_0_0 00:13:16.867 13:42:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.867 13:42:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.867 13:42:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.867 13:42:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.867 13:42:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.867 13:42:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:16.867 Found net devices under 0000:84:00.1: cvl_0_1 00:13:16.867 13:42:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.867 13:42:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:16.867 13:42:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:16.867 13:42:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:16.867 13:42:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:16.867 13:42:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.867 13:42:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.867 13:42:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.867 13:42:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.867 13:42:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.867 13:42:19 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.867 13:42:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.867 13:42:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.867 13:42:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.867 13:42:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.867 13:42:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.867 13:42:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.867 13:42:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.867 13:42:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.867 13:42:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.867 13:42:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.867 13:42:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.867 13:42:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.867 13:42:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.868 13:42:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:13:16.868 00:13:16.868 --- 10.0.0.2 ping statistics --- 00:13:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.868 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:16.868 13:42:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:13:16.868 00:13:16.868 --- 10.0.0.1 ping statistics --- 00:13:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.868 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:16.868 13:42:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.868 13:42:19 -- nvmf/common.sh@411 -- # return 0 00:13:16.868 13:42:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.868 13:42:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.868 13:42:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:16.868 13:42:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:16.868 13:42:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.868 13:42:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:16.868 13:42:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:17.125 13:42:19 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:17.125 13:42:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:17.125 13:42:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:17.125 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.125 13:42:19 -- nvmf/common.sh@470 -- # nvmfpid=2592326 00:13:17.125 13:42:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.125 13:42:19 -- nvmf/common.sh@471 -- # waitforlisten 2592326 00:13:17.125 13:42:19 -- common/autotest_common.sh@817 -- # '[' -z 2592326 ']' 00:13:17.126 13:42:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.126 13:42:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:17.126 13:42:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:17.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.126 13:42:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:17.126 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.126 [2024-04-18 13:42:19.731018] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:17.126 [2024-04-18 13:42:19.731111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.126 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.126 [2024-04-18 13:42:19.796403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.126 [2024-04-18 13:42:19.910492] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.126 [2024-04-18 13:42:19.910544] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.126 [2024-04-18 13:42:19.910574] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.126 [2024-04-18 13:42:19.910587] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.126 [2024-04-18 13:42:19.910598] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.126 [2024-04-18 13:42:19.910669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.126 [2024-04-18 13:42:19.912201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.126 [2024-04-18 13:42:19.912236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.126 [2024-04-18 13:42:19.912240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.383 13:42:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.383 13:42:20 -- common/autotest_common.sh@850 -- # return 0 00:13:17.383 13:42:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:17.383 13:42:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:17.383 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.383 13:42:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.383 13:42:20 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.641 [2024-04-18 13:42:20.325953] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.641 13:42:20 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.899 13:42:20 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:17.899 13:42:20 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.156 13:42:20 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:18.156 13:42:20 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.722 13:42:21 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:18.722 13:42:21 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.722 13:42:21 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:13:18.722 13:42:21 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:18.979 13:42:21 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.236 13:42:22 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:19.236 13:42:22 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.494 13:42:22 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:19.494 13:42:22 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.752 13:42:22 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:19.752 13:42:22 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:20.009 13:42:22 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.266 13:42:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.266 13:42:23 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.524 13:42:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.524 13:42:23 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.781 13:42:23 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.038 [2024-04-18 13:42:23.721854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.038 13:42:23 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:21.295 13:42:23 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:21.553 13:42:24 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.117 13:42:24 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:22.117 13:42:24 -- common/autotest_common.sh@1184 -- # local i=0 00:13:22.117 13:42:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.117 13:42:24 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:22.117 13:42:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:22.117 13:42:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:24.053 13:42:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:24.053 13:42:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:24.053 13:42:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.053 13:42:26 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:24.053 13:42:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.053 13:42:26 -- common/autotest_common.sh@1194 -- # return 0 00:13:24.053 13:42:26 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:24.347 [global] 00:13:24.347 thread=1 00:13:24.347 invalidate=1 00:13:24.347 rw=write 00:13:24.347 time_based=1 00:13:24.347 runtime=1 00:13:24.347 ioengine=libaio 00:13:24.347 direct=1 00:13:24.347 bs=4096 00:13:24.347 
iodepth=1 00:13:24.347 norandommap=0 00:13:24.347 numjobs=1 00:13:24.347 00:13:24.347 verify_dump=1 00:13:24.347 verify_backlog=512 00:13:24.347 verify_state_save=0 00:13:24.347 do_verify=1 00:13:24.347 verify=crc32c-intel 00:13:24.347 [job0] 00:13:24.347 filename=/dev/nvme0n1 00:13:24.347 [job1] 00:13:24.347 filename=/dev/nvme0n2 00:13:24.347 [job2] 00:13:24.347 filename=/dev/nvme0n3 00:13:24.347 [job3] 00:13:24.347 filename=/dev/nvme0n4 00:13:24.347 Could not set queue depth (nvme0n1) 00:13:24.347 Could not set queue depth (nvme0n2) 00:13:24.347 Could not set queue depth (nvme0n3) 00:13:24.347 Could not set queue depth (nvme0n4) 00:13:24.347 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.347 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.347 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.347 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.347 fio-3.35 00:13:24.347 Starting 4 threads 00:13:25.718 00:13:25.718 job0: (groupid=0, jobs=1): err= 0: pid=2593289: Thu Apr 18 13:42:28 2024 00:13:25.718 read: IOPS=1754, BW=7017KiB/s (7185kB/s)(7024KiB/1001msec) 00:13:25.718 slat (nsec): min=5872, max=43756, avg=10817.10, stdev=5213.49 00:13:25.718 clat (usec): min=223, max=612, avg=290.62, stdev=58.03 00:13:25.718 lat (usec): min=231, max=645, avg=301.44, stdev=61.80 00:13:25.718 clat percentiles (usec): 00:13:25.718 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:13:25.718 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:13:25.718 | 70.00th=[ 289], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[ 404], 00:13:25.718 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 506], 99.95th=[ 611], 00:13:25.718 | 99.99th=[ 611] 00:13:25.718 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 
zone resets 00:13:25.718 slat (usec): min=7, max=109, avg=13.56, stdev= 8.11 00:13:25.718 clat (usec): min=149, max=777, avg=209.88, stdev=65.24 00:13:25.718 lat (usec): min=160, max=788, avg=223.44, stdev=70.66 00:13:25.718 clat percentiles (usec): 00:13:25.718 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:13:25.718 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:13:25.718 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 273], 95.00th=[ 343], 00:13:25.718 | 99.00th=[ 469], 99.50th=[ 474], 99.90th=[ 652], 99.95th=[ 734], 00:13:25.718 | 99.99th=[ 775] 00:13:25.718 bw ( KiB/s): min= 8256, max= 8256, per=59.49%, avg=8256.00, stdev= 0.00, samples=1 00:13:25.718 iops : min= 2064, max= 2064, avg=2064.00, stdev= 0.00, samples=1 00:13:25.718 lat (usec) : 250=58.15%, 500=41.69%, 750=0.13%, 1000=0.03% 00:13:25.718 cpu : usr=2.60%, sys=4.70%, ctx=3806, majf=0, minf=1 00:13:25.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.718 issued rwts: total=1756,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.718 job1: (groupid=0, jobs=1): err= 0: pid=2593290: Thu Apr 18 13:42:28 2024 00:13:25.718 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:13:25.718 slat (nsec): min=7741, max=14926, avg=14295.39, stdev=1435.53 00:13:25.718 clat (usec): min=387, max=41081, avg=39036.34, stdev=8458.65 00:13:25.718 lat (usec): min=402, max=41096, avg=39050.64, stdev=8458.61 00:13:25.718 clat percentiles (usec): 00:13:25.718 | 1.00th=[ 388], 5.00th=[37487], 10.00th=[40633], 20.00th=[41157], 00:13:25.718 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.718 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:25.719 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:25.719 | 99.99th=[41157] 00:13:25.719 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:13:25.719 slat (usec): min=6, max=1207, avg=12.94, stdev=53.33 00:13:25.719 clat (usec): min=163, max=1540, avg=247.00, stdev=97.14 00:13:25.719 lat (usec): min=170, max=1578, avg=259.94, stdev=114.22 00:13:25.719 clat percentiles (usec): 00:13:25.719 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:13:25.719 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:13:25.719 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 347], 95.00th=[ 404], 00:13:25.719 | 99.00th=[ 515], 99.50th=[ 799], 99.90th=[ 1549], 99.95th=[ 1549], 00:13:25.719 | 99.99th=[ 1549] 00:13:25.719 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.719 lat (usec) : 250=67.48%, 500=27.29%, 750=0.56%, 1000=0.37% 00:13:25.719 lat (msec) : 2=0.19%, 50=4.11% 00:13:25.719 cpu : usr=0.39%, sys=0.39%, ctx=538, majf=0, minf=2 00:13:25.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.719 job2: (groupid=0, jobs=1): err= 0: pid=2593292: Thu Apr 18 13:42:28 2024 00:13:25.719 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:13:25.719 slat (nsec): min=9980, max=15817, avg=14703.77, stdev=1085.61 00:13:25.719 clat (usec): min=36029, max=42030, avg=40846.20, stdev=1115.07 00:13:25.719 lat (usec): min=36044, max=42045, avg=40860.91, stdev=1114.99 00:13:25.719 clat percentiles (usec): 00:13:25.719 | 1.00th=[35914], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:13:25.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:25.719 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.719 | 99.99th=[42206] 00:13:25.719 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:25.719 slat (usec): min=6, max=154, avg=11.60, stdev= 9.94 00:13:25.719 clat (usec): min=185, max=1058, avg=243.80, stdev=59.47 00:13:25.719 lat (usec): min=193, max=1068, avg=255.40, stdev=61.32 00:13:25.719 clat percentiles (usec): 00:13:25.719 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 215], 00:13:25.719 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:13:25.719 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 285], 95.00th=[ 347], 00:13:25.719 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 1057], 99.95th=[ 1057], 00:13:25.719 | 99.99th=[ 1057] 00:13:25.719 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.719 lat (usec) : 250=72.47%, 500=23.03%, 750=0.19% 00:13:25.719 lat (msec) : 2=0.19%, 50=4.12% 00:13:25.719 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:13:25.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.719 job3: (groupid=0, jobs=1): err= 0: pid=2593293: Thu Apr 18 13:42:28 2024 00:13:25.719 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:13:25.719 slat (nsec): min=8608, max=43426, avg=18758.77, stdev=11588.08 00:13:25.719 clat (usec): 
min=40474, max=41734, avg=41002.63, stdev=215.96 00:13:25.719 lat (usec): min=40482, max=41769, avg=41021.39, stdev=217.64 00:13:25.719 clat percentiles (usec): 00:13:25.719 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:25.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:25.719 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:25.719 | 99.99th=[41681] 00:13:25.719 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:13:25.719 slat (nsec): min=6410, max=45735, avg=9814.59, stdev=4357.94 00:13:25.719 clat (usec): min=179, max=676, avg=237.32, stdev=50.23 00:13:25.719 lat (usec): min=185, max=685, avg=247.14, stdev=51.34 00:13:25.719 clat percentiles (usec): 00:13:25.719 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:13:25.719 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:13:25.719 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 326], 00:13:25.719 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 676], 99.95th=[ 676], 00:13:25.719 | 99.99th=[ 676] 00:13:25.719 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.719 lat (usec) : 250=74.53%, 500=21.16%, 750=0.19% 00:13:25.719 lat (msec) : 50=4.12% 00:13:25.719 cpu : usr=0.19%, sys=0.58%, ctx=534, majf=0, minf=1 00:13:25.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.719 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.719 00:13:25.719 Run status group 0 (all jobs): 
00:13:25.719 READ: bw=7059KiB/s (7228kB/s), 85.3KiB/s-7017KiB/s (87.3kB/s-7185kB/s), io=7292KiB (7467kB), run=1001-1033msec 00:13:25.719 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-8184KiB/s (2030kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:13:25.719 00:13:25.719 Disk stats (read/write): 00:13:25.719 nvme0n1: ios=1559/1708, merge=0/0, ticks=1311/341, in_queue=1652, util=85.47% 00:13:25.719 nvme0n2: ios=72/512, merge=0/0, ticks=852/124, in_queue=976, util=89.42% 00:13:25.719 nvme0n3: ios=74/512, merge=0/0, ticks=1071/123, in_queue=1194, util=93.42% 00:13:25.719 nvme0n4: ios=74/512, merge=0/0, ticks=790/118, in_queue=908, util=95.89% 00:13:25.719 13:42:28 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:25.719 [global] 00:13:25.719 thread=1 00:13:25.719 invalidate=1 00:13:25.719 rw=randwrite 00:13:25.719 time_based=1 00:13:25.719 runtime=1 00:13:25.719 ioengine=libaio 00:13:25.719 direct=1 00:13:25.719 bs=4096 00:13:25.719 iodepth=1 00:13:25.719 norandommap=0 00:13:25.719 numjobs=1 00:13:25.719 00:13:25.719 verify_dump=1 00:13:25.719 verify_backlog=512 00:13:25.719 verify_state_save=0 00:13:25.719 do_verify=1 00:13:25.719 verify=crc32c-intel 00:13:25.719 [job0] 00:13:25.719 filename=/dev/nvme0n1 00:13:25.719 [job1] 00:13:25.719 filename=/dev/nvme0n2 00:13:25.719 [job2] 00:13:25.719 filename=/dev/nvme0n3 00:13:25.719 [job3] 00:13:25.719 filename=/dev/nvme0n4 00:13:25.719 Could not set queue depth (nvme0n1) 00:13:25.719 Could not set queue depth (nvme0n2) 00:13:25.719 Could not set queue depth (nvme0n3) 00:13:25.719 Could not set queue depth (nvme0n4) 00:13:25.977 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.977 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.977 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.977 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.977 fio-3.35 00:13:25.977 Starting 4 threads 00:13:27.348 00:13:27.348 job0: (groupid=0, jobs=1): err= 0: pid=2593636: Thu Apr 18 13:42:29 2024 00:13:27.348 read: IOPS=33, BW=132KiB/s (136kB/s)(136KiB/1027msec) 00:13:27.348 slat (nsec): min=5602, max=39139, avg=13913.94, stdev=5472.91 00:13:27.348 clat (usec): min=235, max=41071, avg=26651.99, stdev=19669.77 00:13:27.348 lat (usec): min=241, max=41088, avg=26665.91, stdev=19672.10 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[ 237], 5.00th=[ 239], 10.00th=[ 285], 20.00th=[ 461], 00:13:27.348 | 30.00th=[ 578], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:13:27.348 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:27.348 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:27.348 | 99.99th=[41157] 00:13:27.348 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:13:27.348 slat (nsec): min=7026, max=55736, avg=10406.04, stdev=5297.10 00:13:27.348 clat (usec): min=175, max=327, avg=220.25, stdev=21.51 00:13:27.348 lat (usec): min=183, max=368, avg=230.66, stdev=22.69 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:13:27.348 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:13:27.348 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 258], 00:13:27.348 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 326], 00:13:27.348 | 99.99th=[ 326] 00:13:27.348 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.348 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.348 lat (usec) : 250=86.45%, 500=8.79%, 750=0.73% 00:13:27.348 lat (msec) : 50=4.03% 00:13:27.348 cpu : usr=0.10%, sys=0.68%, ctx=548, 
majf=0, minf=1 00:13:27.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.348 job1: (groupid=0, jobs=1): err= 0: pid=2593637: Thu Apr 18 13:42:29 2024 00:13:27.348 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:13:27.348 slat (nsec): min=8462, max=38304, avg=15727.36, stdev=5554.95 00:13:27.348 clat (usec): min=40661, max=41155, avg=40973.96, stdev=94.89 00:13:27.348 lat (usec): min=40669, max=41173, avg=40989.69, stdev=94.21 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:27.348 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:27.348 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:27.348 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:27.348 | 99.99th=[41157] 00:13:27.348 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:13:27.348 slat (nsec): min=8648, max=46153, avg=11332.74, stdev=3043.02 00:13:27.348 clat (usec): min=171, max=413, avg=216.98, stdev=22.48 00:13:27.348 lat (usec): min=180, max=425, avg=228.31, stdev=22.99 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:13:27.348 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:27.348 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 251], 00:13:27.348 | 99.00th=[ 285], 99.50th=[ 322], 99.90th=[ 412], 99.95th=[ 412], 00:13:27.348 | 99.99th=[ 412] 00:13:27.348 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.348 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.348 lat (usec) : 250=91.01%, 500=4.87% 00:13:27.348 lat (msec) : 50=4.12% 00:13:27.348 cpu : usr=0.59%, sys=0.59%, ctx=535, majf=0, minf=2 00:13:27.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.348 job2: (groupid=0, jobs=1): err= 0: pid=2593638: Thu Apr 18 13:42:29 2024 00:13:27.348 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:13:27.348 slat (nsec): min=7649, max=39359, avg=16899.86, stdev=7619.81 00:13:27.348 clat (usec): min=40841, max=41225, avg=40985.87, stdev=78.27 00:13:27.348 lat (usec): min=40880, max=41242, avg=41002.77, stdev=77.70 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:27.348 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:27.348 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:27.348 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:27.348 | 99.99th=[41157] 00:13:27.348 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:13:27.348 slat (nsec): min=8070, max=48798, avg=12307.85, stdev=4637.06 00:13:27.348 clat (usec): min=198, max=2646, avg=254.50, stdev=122.50 00:13:27.348 lat (usec): min=210, max=2663, avg=266.81, stdev=122.77 00:13:27.348 clat percentiles (usec): 00:13:27.348 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:13:27.348 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:13:27.348 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:13:27.348 | 99.00th=[ 359], 99.50th=[ 
437], 99.90th=[ 2638], 99.95th=[ 2638], 00:13:27.348 | 99.99th=[ 2638] 00:13:27.348 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.348 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.348 lat (usec) : 250=67.04%, 500=28.46% 00:13:27.348 lat (msec) : 2=0.19%, 4=0.19%, 50=4.12% 00:13:27.348 cpu : usr=0.29%, sys=0.87%, ctx=536, majf=0, minf=1 00:13:27.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.348 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.348 job3: (groupid=0, jobs=1): err= 0: pid=2593639: Thu Apr 18 13:42:29 2024 00:13:27.349 read: IOPS=1882, BW=7528KiB/s (7709kB/s)(7536KiB/1001msec) 00:13:27.349 slat (nsec): min=6106, max=39182, avg=8939.58, stdev=4188.02 00:13:27.349 clat (usec): min=221, max=1521, avg=281.08, stdev=59.92 00:13:27.349 lat (usec): min=227, max=1552, avg=290.02, stdev=62.19 00:13:27.349 clat percentiles (usec): 00:13:27.349 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:13:27.349 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:13:27.349 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 392], 00:13:27.349 | 99.00th=[ 502], 99.50th=[ 553], 99.90th=[ 832], 99.95th=[ 1516], 00:13:27.349 | 99.99th=[ 1516] 00:13:27.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:27.349 slat (usec): min=7, max=803, avg=12.01, stdev=18.60 00:13:27.349 clat (usec): min=154, max=1503, avg=203.69, stdev=53.91 00:13:27.349 lat (usec): min=162, max=1522, avg=215.70, stdev=59.73 00:13:27.349 clat percentiles (usec): 00:13:27.349 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:13:27.349 | 30.00th=[ 
180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:13:27.349 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 253], 95.00th=[ 277], 00:13:27.349 | 99.00th=[ 375], 99.50th=[ 465], 99.90th=[ 742], 99.95th=[ 783], 00:13:27.349 | 99.99th=[ 1500] 00:13:27.349 bw ( KiB/s): min= 8632, max= 8632, per=62.68%, avg=8632.00, stdev= 0.00, samples=1 00:13:27.349 iops : min= 2158, max= 2158, avg=2158.00, stdev= 0.00, samples=1 00:13:27.349 lat (usec) : 250=59.03%, 500=40.31%, 750=0.53%, 1000=0.08% 00:13:27.349 lat (msec) : 2=0.05% 00:13:27.349 cpu : usr=3.70%, sys=4.80%, ctx=3934, majf=0, minf=1 00:13:27.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.349 issued rwts: total=1884,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.349 00:13:27.349 Run status group 0 (all jobs): 00:13:27.349 READ: bw=7539KiB/s (7720kB/s), 84.5KiB/s-7528KiB/s (86.6kB/s-7709kB/s), io=7848KiB (8036kB), run=1001-1041msec 00:13:27.349 WRITE: bw=13.4MiB/s (14.1MB/s), 1967KiB/s-8184KiB/s (2015kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1041msec 00:13:27.349 00:13:27.349 Disk stats (read/write): 00:13:27.349 nvme0n1: ios=54/512, merge=0/0, ticks=1687/115, in_queue=1802, util=98.00% 00:13:27.349 nvme0n2: ios=41/512, merge=0/0, ticks=1682/105, in_queue=1787, util=100.00% 00:13:27.349 nvme0n3: ios=40/512, merge=0/0, ticks=1642/119, in_queue=1761, util=99.90% 00:13:27.349 nvme0n4: ios=1559/1918, merge=0/0, ticks=1344/381, in_queue=1725, util=96.01% 00:13:27.349 13:42:29 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:27.349 [global] 00:13:27.349 thread=1 00:13:27.349 invalidate=1 00:13:27.349 rw=write 00:13:27.349 time_based=1 00:13:27.349 
runtime=1 00:13:27.349 ioengine=libaio 00:13:27.349 direct=1 00:13:27.349 bs=4096 00:13:27.349 iodepth=128 00:13:27.349 norandommap=0 00:13:27.349 numjobs=1 00:13:27.349 00:13:27.349 verify_dump=1 00:13:27.349 verify_backlog=512 00:13:27.349 verify_state_save=0 00:13:27.349 do_verify=1 00:13:27.349 verify=crc32c-intel 00:13:27.349 [job0] 00:13:27.349 filename=/dev/nvme0n1 00:13:27.349 [job1] 00:13:27.349 filename=/dev/nvme0n2 00:13:27.349 [job2] 00:13:27.349 filename=/dev/nvme0n3 00:13:27.349 [job3] 00:13:27.349 filename=/dev/nvme0n4 00:13:27.349 Could not set queue depth (nvme0n1) 00:13:27.349 Could not set queue depth (nvme0n2) 00:13:27.349 Could not set queue depth (nvme0n3) 00:13:27.349 Could not set queue depth (nvme0n4) 00:13:27.349 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.349 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.349 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.349 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.349 fio-3.35 00:13:27.349 Starting 4 threads 00:13:28.721 00:13:28.721 job0: (groupid=0, jobs=1): err= 0: pid=2593871: Thu Apr 18 13:42:31 2024 00:13:28.721 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:13:28.721 slat (usec): min=2, max=16167, avg=112.17, stdev=816.77 00:13:28.721 clat (usec): min=1391, max=54416, avg=15528.27, stdev=7186.18 00:13:28.721 lat (usec): min=1398, max=54421, avg=15640.44, stdev=7249.89 00:13:28.721 clat percentiles (usec): 00:13:28.721 | 1.00th=[ 1532], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11338], 00:13:28.721 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:13:28.721 | 70.00th=[16909], 80.00th=[20579], 90.00th=[25035], 95.00th=[26084], 00:13:28.721 | 99.00th=[43254], 99.50th=[47449], 
99.90th=[47449], 99.95th=[54264], 00:13:28.721 | 99.99th=[54264] 00:13:28.721 write: IOPS=4237, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1008msec); 0 zone resets 00:13:28.721 slat (usec): min=3, max=12285, avg=111.57, stdev=757.32 00:13:28.721 clat (usec): min=2172, max=64480, avg=14979.71, stdev=8268.49 00:13:28.721 lat (usec): min=2178, max=64497, avg=15091.28, stdev=8323.85 00:13:28.721 clat percentiles (usec): 00:13:28.721 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10945], 00:13:28.721 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[13304], 00:13:28.721 | 70.00th=[16319], 80.00th=[17957], 90.00th=[20579], 95.00th=[23725], 00:13:28.721 | 99.00th=[61080], 99.50th=[62129], 99.90th=[63177], 99.95th=[63701], 00:13:28.721 | 99.99th=[64226] 00:13:28.721 bw ( KiB/s): min=16384, max=16992, per=24.66%, avg=16688.00, stdev=429.92, samples=2 00:13:28.721 iops : min= 4096, max= 4248, avg=4172.00, stdev=107.48, samples=2 00:13:28.721 lat (msec) : 2=1.51%, 4=0.23%, 10=10.10%, 20=72.57%, 50=14.46% 00:13:28.721 lat (msec) : 100=1.14% 00:13:28.721 cpu : usr=4.97%, sys=5.36%, ctx=329, majf=0, minf=1 00:13:28.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:28.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.721 issued rwts: total=4096,4271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.721 job1: (groupid=0, jobs=1): err= 0: pid=2593872: Thu Apr 18 13:42:31 2024 00:13:28.721 read: IOPS=4465, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1010msec) 00:13:28.721 slat (usec): min=2, max=19971, avg=119.62, stdev=783.61 00:13:28.721 clat (usec): min=674, max=81316, avg=15362.73, stdev=10739.20 00:13:28.721 lat (usec): min=8420, max=81330, avg=15482.35, stdev=10803.01 00:13:28.721 clat percentiles (usec): 00:13:28.721 | 1.00th=[ 8979], 5.00th=[10159], 
10.00th=[10945], 20.00th=[11469], 00:13:28.721 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:13:28.721 | 70.00th=[13435], 80.00th=[14746], 90.00th=[19268], 95.00th=[25822], 00:13:28.721 | 99.00th=[69731], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:13:28.721 | 99.99th=[81265] 00:13:28.721 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:13:28.721 slat (usec): min=4, max=7509, avg=92.50, stdev=467.58 00:13:28.721 clat (usec): min=7736, max=20130, avg=12605.88, stdev=1464.68 00:13:28.721 lat (usec): min=7752, max=20162, avg=12698.38, stdev=1478.94 00:13:28.721 clat percentiles (usec): 00:13:28.721 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11076], 20.00th=[11469], 00:13:28.721 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:13:28.721 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14353], 95.00th=[15270], 00:13:28.721 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:13:28.722 | 99.99th=[20055] 00:13:28.722 bw ( KiB/s): min=16384, max=20480, per=27.24%, avg=18432.00, stdev=2896.31, samples=2 00:13:28.722 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:28.722 lat (usec) : 750=0.01% 00:13:28.722 lat (msec) : 10=3.45%, 20=92.09%, 50=2.85%, 100=1.59% 00:13:28.722 cpu : usr=5.15%, sys=7.93%, ctx=444, majf=0, minf=1 00:13:28.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:28.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.722 issued rwts: total=4510,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.722 job2: (groupid=0, jobs=1): err= 0: pid=2593873: Thu Apr 18 13:42:31 2024 00:13:28.722 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:28.722 slat (usec): min=2, max=27590, avg=145.67, stdev=1026.32 
00:13:28.722 clat (usec): min=2456, max=88878, avg=19403.64, stdev=12712.10 00:13:28.722 lat (usec): min=2487, max=88894, avg=19549.31, stdev=12807.34 00:13:28.722 clat percentiles (usec): 00:13:28.722 | 1.00th=[ 5932], 5.00th=[10159], 10.00th=[11731], 20.00th=[12911], 00:13:28.722 | 30.00th=[13829], 40.00th=[14484], 50.00th=[14877], 60.00th=[15533], 00:13:28.722 | 70.00th=[16909], 80.00th=[24773], 90.00th=[31589], 95.00th=[47449], 00:13:28.722 | 99.00th=[71828], 99.50th=[71828], 99.90th=[84411], 99.95th=[85459], 00:13:28.722 | 99.99th=[88605] 00:13:28.722 write: IOPS=3590, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1002msec); 0 zone resets 00:13:28.722 slat (usec): min=3, max=18586, avg=112.81, stdev=769.68 00:13:28.722 clat (usec): min=867, max=33364, avg=15008.63, stdev=4420.04 00:13:28.722 lat (usec): min=1796, max=35115, avg=15121.43, stdev=4443.43 00:13:28.722 clat percentiles (usec): 00:13:28.722 | 1.00th=[ 4686], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[12387], 00:13:28.722 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[15008], 00:13:28.722 | 70.00th=[15795], 80.00th=[17433], 90.00th=[21103], 95.00th=[24249], 00:13:28.722 | 99.00th=[28181], 99.50th=[31327], 99.90th=[33424], 99.95th=[33424], 00:13:28.722 | 99.99th=[33424] 00:13:28.722 bw ( KiB/s): min=16384, max=16384, per=24.21%, avg=16384.00, stdev= 0.00, samples=1 00:13:28.722 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:28.722 lat (usec) : 1000=0.01% 00:13:28.722 lat (msec) : 2=0.11%, 4=0.21%, 10=6.34%, 20=73.92%, 50=17.06% 00:13:28.722 lat (msec) : 100=2.35% 00:13:28.722 cpu : usr=2.60%, sys=7.59%, ctx=294, majf=0, minf=1 00:13:28.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:28.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.722 issued rwts: total=3584,3598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.722 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:28.722 job3: (groupid=0, jobs=1): err= 0: pid=2593874: Thu Apr 18 13:42:31 2024 00:13:28.722 read: IOPS=4313, BW=16.8MiB/s (17.7MB/s)(17.0MiB/1008msec) 00:13:28.722 slat (usec): min=2, max=13779, avg=120.98, stdev=845.69 00:13:28.722 clat (usec): min=3647, max=36463, avg=15586.41, stdev=5031.37 00:13:28.722 lat (usec): min=4025, max=36477, avg=15707.39, stdev=5088.42 00:13:28.722 clat percentiles (usec): 00:13:28.722 | 1.00th=[ 6718], 5.00th=[10552], 10.00th=[11338], 20.00th=[12387], 00:13:28.722 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13829], 60.00th=[14615], 00:13:28.722 | 70.00th=[16319], 80.00th=[18482], 90.00th=[23462], 95.00th=[27657], 00:13:28.722 | 99.00th=[30802], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:13:28.722 | 99.99th=[36439] 00:13:28.722 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:13:28.722 slat (usec): min=3, max=11558, avg=91.61, stdev=618.09 00:13:28.722 clat (usec): min=1367, max=32020, avg=13040.40, stdev=4381.88 00:13:28.722 lat (usec): min=1380, max=32027, avg=13132.01, stdev=4418.45 00:13:28.722 clat percentiles (usec): 00:13:28.722 | 1.00th=[ 3818], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[10290], 00:13:28.722 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12780], 60.00th=[13698], 00:13:28.722 | 70.00th=[14615], 80.00th=[15795], 90.00th=[17957], 95.00th=[21103], 00:13:28.722 | 99.00th=[26084], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:13:28.722 | 99.99th=[32113] 00:13:28.722 bw ( KiB/s): min=16720, max=20144, per=27.24%, avg=18432.00, stdev=2421.13, samples=2 00:13:28.722 iops : min= 4180, max= 5036, avg=4608.00, stdev=605.28, samples=2 00:13:28.722 lat (msec) : 2=0.20%, 4=0.54%, 10=10.94%, 20=76.54%, 50=11.78% 00:13:28.722 cpu : usr=4.77%, sys=6.65%, ctx=350, majf=0, minf=1 00:13:28.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:28.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:28.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.722 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.722 00:13:28.722 Run status group 0 (all jobs): 00:13:28.722 READ: bw=64.0MiB/s (67.1MB/s), 14.0MiB/s-17.4MiB/s (14.7MB/s-18.3MB/s), io=64.6MiB (67.7MB), run=1002-1010msec 00:13:28.722 WRITE: bw=66.1MiB/s (69.3MB/s), 14.0MiB/s-17.9MiB/s (14.7MB/s-18.7MB/s), io=66.7MiB (70.0MB), run=1002-1010msec 00:13:28.722 00:13:28.722 Disk stats (read/write): 00:13:28.722 nvme0n1: ios=3327/3584, merge=0/0, ticks=30350/28909, in_queue=59259, util=87.11% 00:13:28.722 nvme0n2: ios=4119/4430, merge=0/0, ticks=18009/16796, in_queue=34805, util=99.19% 00:13:28.722 nvme0n3: ios=2942/3072, merge=0/0, ticks=22598/22031, in_queue=44629, util=95.72% 00:13:28.722 nvme0n4: ios=3641/3772, merge=0/0, ticks=40707/33330, in_queue=74037, util=95.37% 00:13:28.722 13:42:31 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:28.722 [global] 00:13:28.722 thread=1 00:13:28.722 invalidate=1 00:13:28.722 rw=randwrite 00:13:28.722 time_based=1 00:13:28.722 runtime=1 00:13:28.722 ioengine=libaio 00:13:28.722 direct=1 00:13:28.722 bs=4096 00:13:28.722 iodepth=128 00:13:28.722 norandommap=0 00:13:28.722 numjobs=1 00:13:28.722 00:13:28.722 verify_dump=1 00:13:28.722 verify_backlog=512 00:13:28.722 verify_state_save=0 00:13:28.722 do_verify=1 00:13:28.722 verify=crc32c-intel 00:13:28.722 [job0] 00:13:28.722 filename=/dev/nvme0n1 00:13:28.722 [job1] 00:13:28.722 filename=/dev/nvme0n2 00:13:28.722 [job2] 00:13:28.722 filename=/dev/nvme0n3 00:13:28.722 [job3] 00:13:28.722 filename=/dev/nvme0n4 00:13:28.722 Could not set queue depth (nvme0n1) 00:13:28.722 Could not set queue depth (nvme0n2) 00:13:28.722 Could not set queue depth (nvme0n3) 00:13:28.722 Could not 
set queue depth (nvme0n4) 00:13:28.722 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.722 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.722 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.722 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.722 fio-3.35 00:13:28.722 Starting 4 threads 00:13:30.097 00:13:30.097 job0: (groupid=0, jobs=1): err= 0: pid=2594098: Thu Apr 18 13:42:32 2024 00:13:30.097 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:13:30.097 slat (usec): min=2, max=21230, avg=153.58, stdev=1006.25 00:13:30.097 clat (usec): min=2166, max=71512, avg=21217.62, stdev=10601.82 00:13:30.097 lat (usec): min=2171, max=71616, avg=21371.20, stdev=10669.09 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 3195], 5.00th=[10290], 10.00th=[10683], 20.00th=[12911], 00:13:30.097 | 30.00th=[15533], 40.00th=[18220], 50.00th=[19530], 60.00th=[21103], 00:13:30.097 | 70.00th=[23462], 80.00th=[27132], 90.00th=[33162], 95.00th=[41681], 00:13:30.097 | 99.00th=[57934], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:13:30.097 | 99.99th=[71828] 00:13:30.097 write: IOPS=3197, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1004msec); 0 zone resets 00:13:30.097 slat (usec): min=3, max=24175, avg=141.65, stdev=1029.46 00:13:30.097 clat (usec): min=3467, max=65384, avg=19112.84, stdev=11353.80 00:13:30.097 lat (usec): min=3803, max=65432, avg=19254.49, stdev=11441.47 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 4359], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10290], 00:13:30.097 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13698], 60.00th=[17957], 00:13:30.097 | 70.00th=[21627], 80.00th=[28181], 90.00th=[33817], 95.00th=[45876], 00:13:30.097 | 99.00th=[56361], 
99.50th=[58459], 99.90th=[59507], 99.95th=[62129], 00:13:30.097 | 99.99th=[65274] 00:13:30.097 bw ( KiB/s): min= 8312, max=16351, per=20.09%, avg=12331.50, stdev=5684.43, samples=2 00:13:30.097 iops : min= 2078, max= 4087, avg=3082.50, stdev=1420.58, samples=2 00:13:30.097 lat (msec) : 4=1.26%, 10=10.38%, 20=46.94%, 50=39.24%, 100=2.18% 00:13:30.097 cpu : usr=3.49%, sys=4.79%, ctx=273, majf=0, minf=1 00:13:30.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:30.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.097 issued rwts: total=3072,3210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.097 job1: (groupid=0, jobs=1): err= 0: pid=2594099: Thu Apr 18 13:42:32 2024 00:13:30.097 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:13:30.097 slat (usec): min=3, max=17857, avg=109.16, stdev=736.99 00:13:30.097 clat (usec): min=4091, max=57833, avg=14363.64, stdev=6807.97 00:13:30.097 lat (usec): min=4097, max=57839, avg=14472.80, stdev=6860.82 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10683], 00:13:30.097 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[13042], 00:13:30.097 | 70.00th=[14746], 80.00th=[16450], 90.00th=[20579], 95.00th=[25035], 00:13:30.097 | 99.00th=[51643], 99.50th=[53740], 99.90th=[56361], 99.95th=[57934], 00:13:30.097 | 99.99th=[57934] 00:13:30.097 write: IOPS=4963, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1002msec); 0 zone resets 00:13:30.097 slat (usec): min=4, max=11898, avg=90.12, stdev=499.08 00:13:30.097 clat (usec): min=1381, max=25870, avg=12262.04, stdev=4043.80 00:13:30.097 lat (usec): min=1393, max=25877, avg=12352.16, stdev=4071.57 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 8291], 
20.00th=[ 9634], 00:13:30.097 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:13:30.097 | 70.00th=[12256], 80.00th=[14353], 90.00th=[19268], 95.00th=[21365], 00:13:30.097 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25822], 99.95th=[25822], 00:13:30.097 | 99.99th=[25822] 00:13:30.097 bw ( KiB/s): min=18155, max=20576, per=31.55%, avg=19365.50, stdev=1711.91, samples=2 00:13:30.097 iops : min= 4538, max= 5144, avg=4841.00, stdev=428.51, samples=2 00:13:30.097 lat (msec) : 2=0.10%, 10=17.94%, 20=71.92%, 50=9.47%, 100=0.56% 00:13:30.097 cpu : usr=4.90%, sys=8.59%, ctx=412, majf=0, minf=1 00:13:30.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:30.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.097 issued rwts: total=4608,4973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.097 job2: (groupid=0, jobs=1): err= 0: pid=2594102: Thu Apr 18 13:42:32 2024 00:13:30.097 read: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1010msec) 00:13:30.097 slat (usec): min=2, max=23532, avg=181.11, stdev=1326.69 00:13:30.097 clat (usec): min=951, max=70892, avg=21408.37, stdev=12343.92 00:13:30.097 lat (usec): min=5974, max=70900, avg=21589.47, stdev=12418.03 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 7111], 5.00th=[10159], 10.00th=[10814], 20.00th=[12780], 00:13:30.097 | 30.00th=[14222], 40.00th=[15139], 50.00th=[15664], 60.00th=[19006], 00:13:30.097 | 70.00th=[23725], 80.00th=[29230], 90.00th=[40633], 95.00th=[51643], 00:13:30.097 | 99.00th=[61604], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:13:30.097 | 99.99th=[70779] 00:13:30.097 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:13:30.097 slat (usec): min=3, max=19896, avg=159.78, stdev=911.33 00:13:30.097 clat (usec): min=4557, 
max=70895, avg=22519.64, stdev=11339.88 00:13:30.097 lat (usec): min=4564, max=70906, avg=22679.42, stdev=11419.03 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[11731], 20.00th=[13304], 00:13:30.097 | 30.00th=[14353], 40.00th=[15795], 50.00th=[19792], 60.00th=[22938], 00:13:30.097 | 70.00th=[27395], 80.00th=[30540], 90.00th=[38536], 95.00th=[47973], 00:13:30.097 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[70779], 00:13:30.097 | 99.99th=[70779] 00:13:30.097 bw ( KiB/s): min=11704, max=12830, per=19.99%, avg=12267.00, stdev=796.20, samples=2 00:13:30.097 iops : min= 2926, max= 3207, avg=3066.50, stdev=198.70, samples=2 00:13:30.097 lat (usec) : 1000=0.02% 00:13:30.097 lat (msec) : 10=3.80%, 20=51.56%, 50=38.95%, 100=5.66% 00:13:30.097 cpu : usr=1.78%, sys=3.77%, ctx=310, majf=0, minf=1 00:13:30.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:30.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.097 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.097 job3: (groupid=0, jobs=1): err= 0: pid=2594103: Thu Apr 18 13:42:32 2024 00:13:30.097 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:30.097 slat (usec): min=2, max=24465, avg=122.98, stdev=1005.50 00:13:30.097 clat (usec): min=4989, max=47542, avg=16139.82, stdev=7927.91 00:13:30.097 lat (usec): min=4999, max=52067, avg=16262.80, stdev=7993.52 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 5604], 5.00th=[ 7701], 10.00th=[ 9372], 20.00th=[11863], 00:13:30.097 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13829], 60.00th=[14484], 00:13:30.097 | 70.00th=[16909], 80.00th=[17695], 90.00th=[24249], 95.00th=[37487], 00:13:30.097 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 
99.95th=[47449], 00:13:30.097 | 99.99th=[47449] 00:13:30.097 write: IOPS=4226, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1004msec); 0 zone resets 00:13:30.097 slat (usec): min=3, max=19546, avg=101.78, stdev=762.04 00:13:30.097 clat (usec): min=272, max=50133, avg=14331.70, stdev=6888.08 00:13:30.097 lat (usec): min=607, max=50144, avg=14433.48, stdev=6925.78 00:13:30.097 clat percentiles (usec): 00:13:30.097 | 1.00th=[ 2835], 5.00th=[ 5932], 10.00th=[ 7701], 20.00th=[11076], 00:13:30.097 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13304], 00:13:30.097 | 70.00th=[14091], 80.00th=[16450], 90.00th=[23462], 95.00th=[30540], 00:13:30.097 | 99.00th=[38536], 99.50th=[39060], 99.90th=[50070], 99.95th=[50070], 00:13:30.097 | 99.99th=[50070] 00:13:30.097 bw ( KiB/s): min=16384, max=16536, per=26.82%, avg=16460.00, stdev=107.48, samples=2 00:13:30.097 iops : min= 4096, max= 4134, avg=4115.00, stdev=26.87, samples=2 00:13:30.097 lat (usec) : 500=0.01%, 750=0.01% 00:13:30.097 lat (msec) : 2=0.26%, 4=0.71%, 10=12.78%, 20=72.73%, 50=13.38% 00:13:30.097 lat (msec) : 100=0.11% 00:13:30.097 cpu : usr=3.39%, sys=5.98%, ctx=286, majf=0, minf=1 00:13:30.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:30.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.097 issued rwts: total=4096,4243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.097 00:13:30.097 Run status group 0 (all jobs): 00:13:30.098 READ: bw=55.9MiB/s (58.6MB/s), 10.4MiB/s-18.0MiB/s (10.9MB/s-18.8MB/s), io=56.5MiB (59.2MB), run=1002-1010msec 00:13:30.098 WRITE: bw=59.9MiB/s (62.9MB/s), 11.9MiB/s-19.4MiB/s (12.5MB/s-20.3MB/s), io=60.5MiB (63.5MB), run=1002-1010msec 00:13:30.098 00:13:30.098 Disk stats (read/write): 00:13:30.098 nvme0n1: ios=2612/2738, merge=0/0, ticks=25057/25065, in_queue=50122, 
util=97.70% 00:13:30.098 nvme0n2: ios=3584/3935, merge=0/0, ticks=27062/23532, in_queue=50594, util=83.33% 00:13:30.098 nvme0n3: ios=2229/2560, merge=0/0, ticks=28924/33947, in_queue=62871, util=99.13% 00:13:30.098 nvme0n4: ios=3072/3537, merge=0/0, ticks=31187/29690, in_queue=60877, util=89.11% 00:13:30.098 13:42:32 -- target/fio.sh@55 -- # sync 00:13:30.098 13:42:32 -- target/fio.sh@59 -- # fio_pid=2594241 00:13:30.098 13:42:32 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:30.098 13:42:32 -- target/fio.sh@61 -- # sleep 3 00:13:30.098 [global] 00:13:30.098 thread=1 00:13:30.098 invalidate=1 00:13:30.098 rw=read 00:13:30.098 time_based=1 00:13:30.098 runtime=10 00:13:30.098 ioengine=libaio 00:13:30.098 direct=1 00:13:30.098 bs=4096 00:13:30.098 iodepth=1 00:13:30.098 norandommap=1 00:13:30.098 numjobs=1 00:13:30.098 00:13:30.098 [job0] 00:13:30.098 filename=/dev/nvme0n1 00:13:30.098 [job1] 00:13:30.098 filename=/dev/nvme0n2 00:13:30.098 [job2] 00:13:30.098 filename=/dev/nvme0n3 00:13:30.098 [job3] 00:13:30.098 filename=/dev/nvme0n4 00:13:30.098 Could not set queue depth (nvme0n1) 00:13:30.098 Could not set queue depth (nvme0n2) 00:13:30.098 Could not set queue depth (nvme0n3) 00:13:30.098 Could not set queue depth (nvme0n4) 00:13:30.356 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.356 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.356 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.356 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.356 fio-3.35 00:13:30.356 Starting 4 threads 00:13:33.633 13:42:35 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 
00:13:33.633 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14864384, buflen=4096 00:13:33.633 fio: pid=2594459, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.633 13:42:36 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:33.633 13:42:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.633 13:42:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:33.633 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8941568, buflen=4096 00:13:33.633 fio: pid=2594458, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.890 13:42:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.890 13:42:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:33.890 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=352256, buflen=4096 00:13:33.890 fio: pid=2594448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:34.148 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=372736, buflen=4096 00:13:34.148 fio: pid=2594457, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:34.148 13:42:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.148 13:42:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:34.148 00:13:34.148 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2594448: Thu Apr 18 13:42:36 2024 00:13:34.148 read: IOPS=25, BW=99.4KiB/s (102kB/s)(344KiB/3462msec) 00:13:34.148 slat (usec): min=10, max=16929, avg=216.28, 
stdev=1812.72 00:13:34.148 clat (usec): min=469, max=41171, avg=40026.17, stdev=6138.99 00:13:34.148 lat (usec): min=498, max=58021, avg=40244.79, stdev=6435.46 00:13:34.148 clat percentiles (usec): 00:13:34.148 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:34.148 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:34.148 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.148 | 99.99th=[41157] 00:13:34.148 bw ( KiB/s): min= 96, max= 104, per=1.54%, avg=100.00, stdev= 4.38, samples=6 00:13:34.148 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:13:34.148 lat (usec) : 500=2.30% 00:13:34.148 lat (msec) : 50=96.55% 00:13:34.148 cpu : usr=0.12%, sys=0.00%, ctx=90, majf=0, minf=1 00:13:34.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.148 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.148 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.148 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2594457: Thu Apr 18 13:42:36 2024 00:13:34.148 read: IOPS=24, BW=98.4KiB/s (101kB/s)(364KiB/3701msec) 00:13:34.148 slat (nsec): min=9964, max=45062, avg=19597.78, stdev=8378.72 00:13:34.148 clat (usec): min=405, max=42108, avg=40632.03, stdev=4274.60 00:13:34.148 lat (usec): min=428, max=42140, avg=40651.65, stdev=4274.35 00:13:34.148 clat percentiles (usec): 00:13:34.148 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:34.148 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:34.148 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:34.148 | 99.99th=[42206] 00:13:34.148 bw ( KiB/s): min= 93, max= 104, per=1.50%, avg=97.86, stdev= 4.34, samples=7 00:13:34.148 iops : min= 23, max= 26, avg=24.43, stdev= 1.13, samples=7 00:13:34.149 lat (usec) : 500=1.09% 00:13:34.149 lat (msec) : 50=97.83% 00:13:34.149 cpu : usr=0.08%, sys=0.00%, ctx=98, majf=0, minf=1 00:13:34.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.149 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2594458: Thu Apr 18 13:42:36 2024 00:13:34.149 read: IOPS=683, BW=2734KiB/s (2799kB/s)(8732KiB/3194msec) 00:13:34.149 slat (nsec): min=6260, max=82335, avg=9805.58, stdev=7048.61 00:13:34.149 clat (usec): min=246, max=41142, avg=1450.57, stdev=6702.09 00:13:34.149 lat (usec): min=254, max=41157, avg=1460.38, stdev=6704.31 00:13:34.149 clat percentiles (usec): 00:13:34.149 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:13:34.149 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:13:34.149 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 519], 95.00th=[ 644], 00:13:34.149 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.149 | 99.99th=[41157] 00:13:34.149 bw ( KiB/s): min= 96, max=11696, per=44.86%, avg=2904.00, stdev=4789.75, samples=6 00:13:34.149 iops : min= 24, max= 2924, avg=726.00, stdev=1197.44, samples=6 00:13:34.149 lat (usec) : 250=0.23%, 500=89.10%, 750=7.69%, 1000=0.14% 00:13:34.149 lat (msec) : 50=2.79% 00:13:34.149 cpu : usr=0.44%, sys=1.03%, ctx=2184, majf=0, minf=1 00:13:34.149 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.149 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2594459: Thu Apr 18 13:42:36 2024 00:13:34.149 read: IOPS=1252, BW=5009KiB/s (5129kB/s)(14.2MiB/2898msec) 00:13:34.149 slat (nsec): min=4681, max=81864, avg=14721.17, stdev=9699.28 00:13:34.149 clat (usec): min=277, max=41544, avg=780.17, stdev=3971.57 00:13:34.149 lat (usec): min=283, max=41578, avg=794.89, stdev=3972.65 00:13:34.149 clat percentiles (usec): 00:13:34.149 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 318], 00:13:34.149 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:13:34.149 | 70.00th=[ 367], 80.00th=[ 408], 90.00th=[ 644], 95.00th=[ 685], 00:13:34.149 | 99.00th=[ 848], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.149 | 99.99th=[41681] 00:13:34.149 bw ( KiB/s): min= 96, max= 9872, per=67.33%, avg=4358.40, stdev=4236.09, samples=5 00:13:34.149 iops : min= 24, max= 2468, avg=1089.60, stdev=1059.02, samples=5 00:13:34.149 lat (usec) : 500=83.31%, 750=15.43%, 1000=0.25% 00:13:34.149 lat (msec) : 4=0.03%, 50=0.96% 00:13:34.149 cpu : usr=0.52%, sys=2.35%, ctx=3630, majf=0, minf=1 00:13:34.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.149 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.149 00:13:34.149 Run status group 0 (all 
jobs): 00:13:34.149 READ: bw=6473KiB/s (6628kB/s), 98.4KiB/s-5009KiB/s (101kB/s-5129kB/s), io=23.4MiB (24.5MB), run=2898-3701msec 00:13:34.149 00:13:34.149 Disk stats (read/write): 00:13:34.149 nvme0n1: ios=128/0, merge=0/0, ticks=4483/0, in_queue=4483, util=99.37% 00:13:34.149 nvme0n2: ios=127/0, merge=0/0, ticks=4579/0, in_queue=4579, util=99.81% 00:13:34.149 nvme0n3: ios=2180/0, merge=0/0, ticks=3023/0, in_queue=3023, util=96.79% 00:13:34.149 nvme0n4: ios=3496/0, merge=0/0, ticks=2747/0, in_queue=2747, util=96.74% 00:13:34.407 13:42:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.407 13:42:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:34.664 13:42:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.664 13:42:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:34.922 13:42:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.922 13:42:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:35.179 13:42:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:35.179 13:42:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:35.437 13:42:38 -- target/fio.sh@69 -- # fio_status=0 00:13:35.437 13:42:38 -- target/fio.sh@70 -- # wait 2594241 00:13:35.437 13:42:38 -- target/fio.sh@70 -- # fio_status=4 00:13:35.437 13:42:38 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.437 13:42:38 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:13:35.437 13:42:38 -- common/autotest_common.sh@1205 -- # local i=0 00:13:35.437 13:42:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:35.437 13:42:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.437 13:42:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:35.437 13:42:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.694 13:42:38 -- common/autotest_common.sh@1217 -- # return 0 00:13:35.694 13:42:38 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:35.694 13:42:38 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:35.694 nvmf hotplug test: fio failed as expected 00:13:35.694 13:42:38 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.694 13:42:38 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:35.694 13:42:38 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:35.694 13:42:38 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:35.694 13:42:38 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:35.694 13:42:38 -- target/fio.sh@91 -- # nvmftestfini 00:13:35.694 13:42:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:35.694 13:42:38 -- nvmf/common.sh@117 -- # sync 00:13:35.694 13:42:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.694 13:42:38 -- nvmf/common.sh@120 -- # set +e 00:13:35.694 13:42:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.694 13:42:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.952 rmmod nvme_tcp 00:13:35.952 rmmod nvme_fabrics 00:13:35.952 rmmod nvme_keyring 00:13:35.952 13:42:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.952 13:42:38 -- nvmf/common.sh@124 -- # set -e 00:13:35.952 13:42:38 -- nvmf/common.sh@125 -- # return 0 00:13:35.952 13:42:38 -- nvmf/common.sh@478 -- # '[' -n 2592326 ']' 00:13:35.952 13:42:38 -- 
nvmf/common.sh@479 -- # killprocess 2592326 00:13:35.952 13:42:38 -- common/autotest_common.sh@936 -- # '[' -z 2592326 ']' 00:13:35.952 13:42:38 -- common/autotest_common.sh@940 -- # kill -0 2592326 00:13:35.952 13:42:38 -- common/autotest_common.sh@941 -- # uname 00:13:35.952 13:42:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.952 13:42:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2592326 00:13:35.952 13:42:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.952 13:42:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.952 13:42:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2592326' 00:13:35.952 killing process with pid 2592326 00:13:35.952 13:42:38 -- common/autotest_common.sh@955 -- # kill 2592326 00:13:35.952 13:42:38 -- common/autotest_common.sh@960 -- # wait 2592326 00:13:36.211 13:42:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:36.211 13:42:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:36.211 13:42:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:36.211 13:42:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.211 13:42:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.211 13:42:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.211 13:42:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.211 13:42:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.114 13:42:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.114 00:13:38.114 real 0m23.510s 00:13:38.114 user 1m22.527s 00:13:38.114 sys 0m5.958s 00:13:38.114 13:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.114 13:42:40 -- common/autotest_common.sh@10 -- # set +x 00:13:38.114 ************************************ 00:13:38.114 END TEST nvmf_fio_target 00:13:38.114 ************************************ 00:13:38.114 13:42:40 -- 
nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.114 13:42:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:38.114 13:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.114 13:42:40 -- common/autotest_common.sh@10 -- # set +x 00:13:38.372 ************************************ 00:13:38.372 START TEST nvmf_bdevio 00:13:38.372 ************************************ 00:13:38.372 13:42:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.372 * Looking for test storage... 00:13:38.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.372 13:42:41 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.372 13:42:41 -- nvmf/common.sh@7 -- # uname -s 00:13:38.372 13:42:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.372 13:42:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.372 13:42:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.372 13:42:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.372 13:42:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.372 13:42:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.372 13:42:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.372 13:42:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.372 13:42:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.372 13:42:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.372 13:42:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:38.372 13:42:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:38.372 13:42:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:13:38.372 13:42:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.372 13:42:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.372 13:42:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.372 13:42:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.372 13:42:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.372 13:42:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.372 13:42:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.372 13:42:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.372 13:42:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.372 13:42:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.372 13:42:41 -- paths/export.sh@5 -- # export PATH 00:13:38.372 13:42:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.372 13:42:41 -- nvmf/common.sh@47 -- # : 0 00:13:38.372 13:42:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.372 13:42:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.372 13:42:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.372 13:42:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.372 13:42:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.372 13:42:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.372 13:42:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.372 13:42:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.372 13:42:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.372 13:42:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.372 13:42:41 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:13:38.372 13:42:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:38.372 13:42:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.372 13:42:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:38.372 13:42:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:38.372 13:42:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:38.372 13:42:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.372 13:42:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.372 13:42:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.372 13:42:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:38.372 13:42:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:38.372 13:42:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.373 13:42:41 -- common/autotest_common.sh@10 -- # set +x 00:13:40.274 13:42:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.274 13:42:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.274 13:42:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.274 13:42:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.274 13:42:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.274 13:42:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.274 13:42:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.274 13:42:42 -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.274 13:42:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.274 13:42:42 -- nvmf/common.sh@296 -- # e810=() 00:13:40.274 13:42:42 -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.274 13:42:42 -- nvmf/common.sh@297 -- # x722=() 00:13:40.274 13:42:42 -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.274 13:42:42 -- nvmf/common.sh@298 -- # mlx=() 00:13:40.274 13:42:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.274 13:42:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.274 13:42:42 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.274 13:42:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.274 13:42:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.275 13:42:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.275 13:42:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.275 13:42:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.275 13:42:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.275 13:42:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.275 13:42:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:40.275 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:40.275 13:42:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.275 13:42:43 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:40.275 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:40.275 13:42:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.275 13:42:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.275 13:42:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.275 13:42:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.275 13:42:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.275 13:42:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:40.275 Found net devices under 0000:84:00.0: cvl_0_0 00:13:40.275 13:42:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.275 13:42:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.275 13:42:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.275 13:42:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.275 13:42:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.275 13:42:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:40.275 Found net devices under 0000:84:00.1: cvl_0_1 00:13:40.275 13:42:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.275 13:42:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:40.275 13:42:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:40.275 13:42:43 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:13:40.275 13:42:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:40.275 13:42:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:40.275 13:42:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.275 13:42:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.275 13:42:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.275 13:42:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.275 13:42:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.275 13:42:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.275 13:42:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.275 13:42:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.275 13:42:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.275 13:42:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.275 13:42:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.275 13:42:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.275 13:42:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.275 13:42:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.531 13:42:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.531 13:42:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.531 13:42:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.531 13:42:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.531 13:42:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.531 13:42:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:13:40.531 00:13:40.531 --- 10.0.0.2 ping statistics --- 00:13:40.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.531 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:40.531 13:42:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:40.531 00:13:40.531 --- 10.0.0.1 ping statistics --- 00:13:40.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.532 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:40.532 13:42:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.532 13:42:43 -- nvmf/common.sh@411 -- # return 0 00:13:40.532 13:42:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:40.532 13:42:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.532 13:42:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:40.532 13:42:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:40.532 13:42:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.532 13:42:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:40.532 13:42:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:40.532 13:42:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:40.532 13:42:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:40.532 13:42:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.532 13:42:43 -- common/autotest_common.sh@10 -- # set +x 00:13:40.532 13:42:43 -- nvmf/common.sh@470 -- # nvmfpid=2597089 00:13:40.532 13:42:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:40.532 13:42:43 -- nvmf/common.sh@471 -- # waitforlisten 2597089 00:13:40.532 13:42:43 -- common/autotest_common.sh@817 
-- # '[' -z 2597089 ']' 00:13:40.532 13:42:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.532 13:42:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.532 13:42:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.532 13:42:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.532 13:42:43 -- common/autotest_common.sh@10 -- # set +x 00:13:40.532 [2024-04-18 13:42:43.219443] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:40.532 [2024-04-18 13:42:43.219553] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.532 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.532 [2024-04-18 13:42:43.290963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.826 [2024-04-18 13:42:43.411668] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.826 [2024-04-18 13:42:43.411742] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.826 [2024-04-18 13:42:43.411759] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.826 [2024-04-18 13:42:43.411773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.826 [2024-04-18 13:42:43.411785] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:40.826 [2024-04-18 13:42:43.411881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.826 [2024-04-18 13:42:43.411936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.826 [2024-04-18 13:42:43.411989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.826 [2024-04-18 13:42:43.411992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.416 13:42:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.416 13:42:44 -- common/autotest_common.sh@850 -- # return 0 00:13:41.416 13:42:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.416 13:42:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 13:42:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.416 13:42:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.416 13:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 [2024-04-18 13:42:44.162920] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.416 13:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.416 13:42:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.416 13:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 Malloc0 00:13:41.416 13:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.416 13:42:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:41.416 13:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 13:42:44 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:13:41.416 13:42:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.416 13:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 13:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.416 13:42:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.416 13:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.416 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:13:41.416 [2024-04-18 13:42:44.214751] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.416 13:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.416 13:42:44 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:41.416 13:42:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:41.416 13:42:44 -- nvmf/common.sh@521 -- # config=() 00:13:41.416 13:42:44 -- nvmf/common.sh@521 -- # local subsystem config 00:13:41.416 13:42:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:41.416 13:42:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:41.416 { 00:13:41.416 "params": { 00:13:41.416 "name": "Nvme$subsystem", 00:13:41.416 "trtype": "$TEST_TRANSPORT", 00:13:41.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.416 "adrfam": "ipv4", 00:13:41.416 "trsvcid": "$NVMF_PORT", 00:13:41.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.416 "hdgst": ${hdgst:-false}, 00:13:41.416 "ddgst": ${ddgst:-false} 00:13:41.416 }, 00:13:41.416 "method": "bdev_nvme_attach_controller" 00:13:41.416 } 00:13:41.416 EOF 00:13:41.416 )") 00:13:41.416 13:42:44 -- nvmf/common.sh@543 -- # cat 00:13:41.673 13:42:44 -- nvmf/common.sh@545 -- # jq . 
00:13:41.673 13:42:44 -- nvmf/common.sh@546 -- # IFS=, 00:13:41.673 13:42:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:41.673 "params": { 00:13:41.673 "name": "Nvme1", 00:13:41.673 "trtype": "tcp", 00:13:41.673 "traddr": "10.0.0.2", 00:13:41.673 "adrfam": "ipv4", 00:13:41.673 "trsvcid": "4420", 00:13:41.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.673 "hdgst": false, 00:13:41.673 "ddgst": false 00:13:41.673 }, 00:13:41.673 "method": "bdev_nvme_attach_controller" 00:13:41.673 }' 00:13:41.673 [2024-04-18 13:42:44.259643] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:41.673 [2024-04-18 13:42:44.259739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597165 ] 00:13:41.673 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.673 [2024-04-18 13:42:44.324491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.673 [2024-04-18 13:42:44.437857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.674 [2024-04-18 13:42:44.437910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.674 [2024-04-18 13:42:44.437913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.968 I/O targets: 00:13:41.968 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:41.968 00:13:41.968 00:13:41.968 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.968 http://cunit.sourceforge.net/ 00:13:41.968 00:13:41.968 00:13:41.968 Suite: bdevio tests on: Nvme1n1 00:13:41.968 Test: blockdev write read block ...passed 00:13:41.968 Test: blockdev write zeroes read block ...passed 00:13:41.968 Test: blockdev write zeroes read no split ...passed 00:13:42.224 Test: blockdev write zeroes read split ...passed 00:13:42.224 Test: blockdev write 
zeroes read split partial ...passed 00:13:42.224 Test: blockdev reset ...[2024-04-18 13:42:44.842223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:42.224 [2024-04-18 13:42:44.842335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa83a0 (9): Bad file descriptor 00:13:42.224 [2024-04-18 13:42:44.856901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:42.224 passed 00:13:42.224 Test: blockdev write read 8 blocks ...passed 00:13:42.224 Test: blockdev write read size > 128k ...passed 00:13:42.224 Test: blockdev write read invalid size ...passed 00:13:42.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.224 Test: blockdev write read max offset ...passed 00:13:42.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.224 Test: blockdev writev readv 8 blocks ...passed 00:13:42.481 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.481 Test: blockdev writev readv block ...passed 00:13:42.481 Test: blockdev writev readv size > 128k ...passed 00:13:42.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.481 Test: blockdev comparev and writev ...[2024-04-18 13:42:45.117623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.117672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.117697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.118156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.118189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.118214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.118241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.118662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.118688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.118710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.118726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.119199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.119235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.119258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.481 [2024-04-18 13:42:45.119274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:42.481 passed 00:13:42.481 Test: blockdev nvme passthru rw ...passed 00:13:42.481 Test: blockdev nvme passthru vendor specific ...[2024-04-18 13:42:45.202767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.481 [2024-04-18 13:42:45.202794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.203128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.481 [2024-04-18 13:42:45.203154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.203463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.481 [2024-04-18 13:42:45.203488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:42.481 [2024-04-18 13:42:45.203786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.482 [2024-04-18 13:42:45.203810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:42.482 passed 00:13:42.482 Test: blockdev nvme admin passthru ...passed 00:13:42.482 Test: blockdev copy ...passed 00:13:42.482 00:13:42.482 Run Summary: Type Total Ran Passed Failed Inactive 00:13:42.482 suites 1 1 n/a 0 0 00:13:42.482 tests 23 23 23 0 0 00:13:42.482 asserts 152 152 152 0 n/a 00:13:42.482 00:13:42.482 Elapsed time = 1.279 seconds 00:13:42.738 13:42:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:42.738 13:42:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.738 13:42:45 -- common/autotest_common.sh@10 -- # set +x 00:13:42.738 13:42:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.738 13:42:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:42.738 13:42:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:42.738 13:42:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:42.738 13:42:45 -- nvmf/common.sh@117 -- # sync 00:13:42.738 13:42:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.738 13:42:45 -- nvmf/common.sh@120 -- # set +e 00:13:42.738 13:42:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.738 13:42:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.738 rmmod nvme_tcp 00:13:42.738 rmmod nvme_fabrics 00:13:42.738 rmmod nvme_keyring 00:13:42.738 13:42:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.996 13:42:45 -- nvmf/common.sh@124 -- # set -e 00:13:42.996 13:42:45 -- nvmf/common.sh@125 -- # return 0 00:13:42.996 13:42:45 -- nvmf/common.sh@478 -- # '[' -n 2597089 ']' 00:13:42.996 13:42:45 -- nvmf/common.sh@479 -- # killprocess 2597089 00:13:42.996 13:42:45 -- common/autotest_common.sh@936 -- # '[' -z 2597089 ']' 00:13:42.996 13:42:45 -- common/autotest_common.sh@940 -- # kill -0 2597089 00:13:42.996 13:42:45 -- common/autotest_common.sh@941 -- # uname 00:13:42.996 13:42:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.996 13:42:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2597089 00:13:42.996 13:42:45 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:42.996 13:42:45 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:42.996 13:42:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2597089' 00:13:42.996 killing process with pid 2597089 00:13:42.996 13:42:45 -- common/autotest_common.sh@955 -- # kill 2597089 00:13:42.996 13:42:45 -- 
common/autotest_common.sh@960 -- # wait 2597089 00:13:43.256 13:42:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:43.256 13:42:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:43.256 13:42:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:43.256 13:42:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.256 13:42:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.256 13:42:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.256 13:42:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.256 13:42:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.157 13:42:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.157 00:13:45.157 real 0m6.901s 00:13:45.157 user 0m12.720s 00:13:45.157 sys 0m2.063s 00:13:45.157 13:42:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.157 13:42:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.157 ************************************ 00:13:45.157 END TEST nvmf_bdevio 00:13:45.157 ************************************ 00:13:45.157 13:42:47 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:45.157 13:42:47 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.157 13:42:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:45.157 13:42:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.157 13:42:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.415 ************************************ 00:13:45.415 START TEST nvmf_bdevio_no_huge 00:13:45.415 ************************************ 00:13:45.415 13:42:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.415 * Looking for test storage... 
00:13:45.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.415 13:42:48 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.415 13:42:48 -- nvmf/common.sh@7 -- # uname -s 00:13:45.415 13:42:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.415 13:42:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.415 13:42:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.415 13:42:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.415 13:42:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.415 13:42:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.415 13:42:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.415 13:42:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.415 13:42:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.415 13:42:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.415 13:42:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.415 13:42:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.415 13:42:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.415 13:42:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.415 13:42:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.415 13:42:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.415 13:42:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.415 13:42:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.415 13:42:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.415 13:42:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.416 13:42:48 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.416 13:42:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.416 13:42:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.416 13:42:48 -- paths/export.sh@5 -- # export PATH 00:13:45.416 13:42:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.416 13:42:48 -- nvmf/common.sh@47 -- # : 0 00:13:45.416 13:42:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.416 13:42:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.416 13:42:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.416 13:42:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.416 13:42:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.416 13:42:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.416 13:42:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.416 13:42:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.416 13:42:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.416 13:42:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.416 13:42:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:45.416 13:42:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:45.416 13:42:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.416 13:42:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:45.416 13:42:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:45.416 13:42:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:45.416 13:42:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.416 13:42:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.416 13:42:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.416 13:42:48 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:45.416 13:42:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:45.416 13:42:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.416 13:42:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.316 13:42:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.316 13:42:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.316 13:42:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.316 13:42:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.316 13:42:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.316 13:42:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.316 13:42:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.316 13:42:49 -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.316 13:42:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.316 13:42:49 -- nvmf/common.sh@296 -- # e810=() 00:13:47.316 13:42:49 -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.316 13:42:49 -- nvmf/common.sh@297 -- # x722=() 00:13:47.316 13:42:49 -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.316 13:42:49 -- nvmf/common.sh@298 -- # mlx=() 00:13:47.316 13:42:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.316 13:42:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.316 13:42:49 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.316 13:42:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.316 13:42:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.316 13:42:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.316 13:42:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.316 13:42:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:47.316 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:47.316 13:42:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.316 13:42:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:47.316 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:47.316 13:42:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.316 13:42:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.316 13:42:49 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.316 13:42:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.316 13:42:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.316 13:42:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.316 13:42:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.316 13:42:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:47.316 Found net devices under 0000:84:00.0: cvl_0_0 00:13:47.316 13:42:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.316 13:42:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.317 13:42:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.317 13:42:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.317 13:42:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.317 13:42:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:47.317 Found net devices under 0000:84:00.1: cvl_0_1 00:13:47.317 13:42:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.317 13:42:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:47.317 13:42:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:47.317 13:42:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:47.317 13:42:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:47.317 13:42:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:47.317 13:42:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.317 13:42:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.317 13:42:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.317 13:42:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.317 13:42:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.317 13:42:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.317 13:42:49 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.317 13:42:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.317 13:42:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.317 13:42:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.317 13:42:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.317 13:42:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.317 13:42:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.317 13:42:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.317 13:42:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.317 13:42:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.317 13:42:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.317 13:42:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.317 13:42:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.317 13:42:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:13:47.317 00:13:47.317 --- 10.0.0.2 ping statistics --- 00:13:47.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.317 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:13:47.317 13:42:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:47.317 00:13:47.317 --- 10.0.0.1 ping statistics --- 00:13:47.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.317 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:47.317 13:42:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.317 13:42:50 -- nvmf/common.sh@411 -- # return 0 00:13:47.317 13:42:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:47.317 13:42:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.317 13:42:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:47.317 13:42:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:47.317 13:42:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.317 13:42:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:47.317 13:42:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:47.317 13:42:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:47.317 13:42:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:47.317 13:42:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:47.317 13:42:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.575 13:42:50 -- nvmf/common.sh@470 -- # nvmfpid=2599343 00:13:47.575 13:42:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:47.575 13:42:50 -- nvmf/common.sh@471 -- # waitforlisten 2599343 00:13:47.575 13:42:50 -- common/autotest_common.sh@817 -- # '[' -z 2599343 ']' 00:13:47.575 13:42:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.575 13:42:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.575 13:42:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:47.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.575 13:42:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.575 13:42:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.575 [2024-04-18 13:42:50.173939] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:47.575 [2024-04-18 13:42:50.174032] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:47.575 [2024-04-18 13:42:50.248964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.575 [2024-04-18 13:42:50.371551] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.575 [2024-04-18 13:42:50.371625] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.575 [2024-04-18 13:42:50.371655] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.575 [2024-04-18 13:42:50.371666] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.575 [2024-04-18 13:42:50.371675] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:47.575 [2024-04-18 13:42:50.371777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:47.575 [2024-04-18 13:42:50.371828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:47.575 [2024-04-18 13:42:50.371877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:47.575 [2024-04-18 13:42:50.371880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.508 13:42:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.508 13:42:51 -- common/autotest_common.sh@850 -- # return 0 00:13:48.508 13:42:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:48.508 13:42:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 13:42:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.508 13:42:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.508 13:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 [2024-04-18 13:42:51.172571] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.508 13:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.508 13:42:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:48.508 13:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 Malloc0 00:13:48.508 13:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.508 13:42:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:48.508 13:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 13:42:51 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:13:48.508 13:42:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.508 13:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 13:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.508 13:42:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.508 13:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.508 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.508 [2024-04-18 13:42:51.210408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.508 13:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.508 13:42:51 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:48.508 13:42:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:48.508 13:42:51 -- nvmf/common.sh@521 -- # config=() 00:13:48.508 13:42:51 -- nvmf/common.sh@521 -- # local subsystem config 00:13:48.508 13:42:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:48.508 13:42:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:48.508 { 00:13:48.508 "params": { 00:13:48.508 "name": "Nvme$subsystem", 00:13:48.508 "trtype": "$TEST_TRANSPORT", 00:13:48.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:48.508 "adrfam": "ipv4", 00:13:48.508 "trsvcid": "$NVMF_PORT", 00:13:48.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:48.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:48.508 "hdgst": ${hdgst:-false}, 00:13:48.508 "ddgst": ${ddgst:-false} 00:13:48.508 }, 00:13:48.508 "method": "bdev_nvme_attach_controller" 00:13:48.508 } 00:13:48.508 EOF 00:13:48.508 )") 00:13:48.508 13:42:51 -- nvmf/common.sh@543 -- # cat 00:13:48.508 13:42:51 -- nvmf/common.sh@545 -- # jq 
. 00:13:48.508 13:42:51 -- nvmf/common.sh@546 -- # IFS=, 00:13:48.508 13:42:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:48.508 "params": { 00:13:48.508 "name": "Nvme1", 00:13:48.508 "trtype": "tcp", 00:13:48.508 "traddr": "10.0.0.2", 00:13:48.508 "adrfam": "ipv4", 00:13:48.508 "trsvcid": "4420", 00:13:48.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.508 "hdgst": false, 00:13:48.508 "ddgst": false 00:13:48.508 }, 00:13:48.508 "method": "bdev_nvme_attach_controller" 00:13:48.508 }' 00:13:48.508 [2024-04-18 13:42:51.253189] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:48.508 [2024-04-18 13:42:51.253269] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2599503 ] 00:13:48.766 [2024-04-18 13:42:51.319729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.766 [2024-04-18 13:42:51.432766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.766 [2024-04-18 13:42:51.432794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.766 [2024-04-18 13:42:51.432798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.023 I/O targets: 00:13:49.023 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:49.023 00:13:49.023 00:13:49.023 CUnit - A unit testing framework for C - Version 2.1-3 00:13:49.023 http://cunit.sourceforge.net/ 00:13:49.023 00:13:49.023 00:13:49.023 Suite: bdevio tests on: Nvme1n1 00:13:49.023 Test: blockdev write read block ...passed 00:13:49.281 Test: blockdev write zeroes read block ...passed 00:13:49.281 Test: blockdev write zeroes read no split ...passed 00:13:49.281 Test: blockdev write zeroes read split ...passed 00:13:49.281 Test: blockdev write zeroes read split partial ...passed 00:13:49.281 Test: 
blockdev reset ...[2024-04-18 13:42:51.965806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:49.281 [2024-04-18 13:42:51.965921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3980 (9): Bad file descriptor 00:13:49.281 [2024-04-18 13:42:51.985600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:49.281 passed 00:13:49.281 Test: blockdev write read 8 blocks ...passed 00:13:49.281 Test: blockdev write read size > 128k ...passed 00:13:49.281 Test: blockdev write read invalid size ...passed 00:13:49.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.281 Test: blockdev write read max offset ...passed 00:13:49.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.540 Test: blockdev writev readv 8 blocks ...passed 00:13:49.540 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.540 Test: blockdev writev readv block ...passed 00:13:49.540 Test: blockdev writev readv size > 128k ...passed 00:13:49.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.540 Test: blockdev comparev and writev ...[2024-04-18 13:42:52.164563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.164598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.164624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.164642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:13:49.540 [2024-04-18 13:42:52.165030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.165056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.165077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.165094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.165430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.165455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.165477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.165493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.165941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.165965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.165987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:49.540 [2024-04-18 13:42:52.166003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:49.540 passed 00:13:49.540 Test: blockdev nvme passthru rw ...passed 00:13:49.540 Test: blockdev nvme passthru vendor specific ...[2024-04-18 13:42:52.248499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.540 [2024-04-18 13:42:52.248527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.248704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.540 [2024-04-18 13:42:52.248726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.248892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.540 [2024-04-18 13:42:52.248916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:49.540 [2024-04-18 13:42:52.249088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:49.540 [2024-04-18 13:42:52.249111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:49.540 passed 00:13:49.540 Test: blockdev nvme admin passthru ...passed 00:13:49.540 Test: blockdev copy ...passed 00:13:49.540 00:13:49.540 Run Summary: Type Total Ran Passed Failed Inactive 00:13:49.540 suites 1 1 n/a 0 0 00:13:49.540 tests 23 23 23 0 0 00:13:49.540 asserts 152 152 152 0 n/a 00:13:49.540 00:13:49.540 Elapsed time = 1.121 seconds 00:13:50.105 13:42:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.105 13:42:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.105 13:42:52 -- common/autotest_common.sh@10 -- # set +x 00:13:50.105 13:42:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.105 13:42:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:50.105 13:42:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:50.105 13:42:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:50.105 13:42:52 -- nvmf/common.sh@117 -- # sync 00:13:50.105 13:42:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:50.105 13:42:52 -- nvmf/common.sh@120 -- # set +e 00:13:50.105 13:42:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.105 13:42:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:50.105 rmmod nvme_tcp 00:13:50.105 rmmod nvme_fabrics 00:13:50.105 rmmod nvme_keyring 00:13:50.105 13:42:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.105 13:42:52 -- nvmf/common.sh@124 -- # set -e 00:13:50.105 13:42:52 -- nvmf/common.sh@125 -- # return 0 00:13:50.105 13:42:52 -- nvmf/common.sh@478 -- # '[' -n 2599343 ']' 00:13:50.105 13:42:52 -- nvmf/common.sh@479 -- # killprocess 2599343 00:13:50.105 13:42:52 -- common/autotest_common.sh@936 -- # '[' -z 2599343 ']' 00:13:50.105 13:42:52 -- common/autotest_common.sh@940 -- # kill -0 2599343 00:13:50.105 13:42:52 -- common/autotest_common.sh@941 -- # uname 00:13:50.105 13:42:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.105 13:42:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2599343 00:13:50.105 13:42:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:50.105 13:42:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:50.105 13:42:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2599343' 00:13:50.105 killing process with pid 2599343 00:13:50.105 13:42:52 -- common/autotest_common.sh@955 -- # kill 2599343 00:13:50.105 13:42:52 -- common/autotest_common.sh@960 -- # wait 2599343 00:13:50.671 
13:42:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:50.671 13:42:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:50.671 13:42:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:50.671 13:42:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.671 13:42:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.671 13:42:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.671 13:42:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.671 13:42:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.573 13:42:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:52.573 00:13:52.573 real 0m7.206s 00:13:52.573 user 0m14.183s 00:13:52.573 sys 0m2.514s 00:13:52.573 13:42:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.573 13:42:55 -- common/autotest_common.sh@10 -- # set +x 00:13:52.573 ************************************ 00:13:52.573 END TEST nvmf_bdevio_no_huge 00:13:52.573 ************************************ 00:13:52.573 13:42:55 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:52.573 13:42:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:52.573 13:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.573 13:42:55 -- common/autotest_common.sh@10 -- # set +x 00:13:52.573 ************************************ 00:13:52.573 START TEST nvmf_tls 00:13:52.573 ************************************ 00:13:52.573 13:42:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:52.832 * Looking for test storage... 
00:13:52.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.832 13:42:55 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.832 13:42:55 -- nvmf/common.sh@7 -- # uname -s 00:13:52.832 13:42:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.832 13:42:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.832 13:42:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.832 13:42:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.832 13:42:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.832 13:42:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.832 13:42:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.832 13:42:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.832 13:42:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.832 13:42:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.832 13:42:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:52.832 13:42:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:52.832 13:42:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.832 13:42:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.832 13:42:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.832 13:42:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.832 13:42:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.832 13:42:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.832 13:42:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.832 13:42:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.832 13:42:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.832 13:42:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.832 13:42:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.832 13:42:55 -- paths/export.sh@5 -- # export PATH 00:13:52.832 13:42:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.832 13:42:55 -- nvmf/common.sh@47 -- # : 0 00:13:52.832 13:42:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.832 13:42:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.832 13:42:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.832 13:42:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.832 13:42:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.832 13:42:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.832 13:42:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.832 13:42:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.832 13:42:55 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.832 13:42:55 -- target/tls.sh@62 -- # nvmftestinit 00:13:52.832 13:42:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:52.832 13:42:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.832 13:42:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:52.832 13:42:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:52.832 13:42:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:52.832 13:42:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.832 13:42:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.832 13:42:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.832 13:42:55 -- nvmf/common.sh@403 -- # [[ phy != virt 
]] 00:13:52.832 13:42:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:52.832 13:42:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:52.832 13:42:55 -- common/autotest_common.sh@10 -- # set +x 00:13:54.733 13:42:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:54.733 13:42:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.733 13:42:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.733 13:42:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.733 13:42:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.733 13:42:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.733 13:42:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.733 13:42:57 -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.733 13:42:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.733 13:42:57 -- nvmf/common.sh@296 -- # e810=() 00:13:54.733 13:42:57 -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.733 13:42:57 -- nvmf/common.sh@297 -- # x722=() 00:13:54.733 13:42:57 -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.733 13:42:57 -- nvmf/common.sh@298 -- # mlx=() 00:13:54.733 13:42:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.733 13:42:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.733 13:42:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.733 13:42:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.733 13:42:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.733 13:42:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.733 13:42:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:54.733 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:54.733 13:42:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.733 13:42:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:54.733 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:54.733 13:42:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.733 13:42:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:13:54.733 13:42:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.733 13:42:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.733 13:42:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.733 13:42:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.733 13:42:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:54.733 Found net devices under 0000:84:00.0: cvl_0_0 00:13:54.733 13:42:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.733 13:42:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.733 13:42:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.733 13:42:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.734 13:42:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.734 13:42:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:54.734 Found net devices under 0000:84:00.1: cvl_0_1 00:13:54.734 13:42:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.734 13:42:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:54.734 13:42:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:54.734 13:42:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:54.734 13:42:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:54.734 13:42:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:54.734 13:42:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.734 13:42:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.734 13:42:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.734 13:42:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.734 13:42:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.734 13:42:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.734 13:42:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:13:54.734 13:42:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.734 13:42:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.734 13:42:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.734 13:42:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.734 13:42:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.734 13:42:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.734 13:42:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.734 13:42:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.734 13:42:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.734 13:42:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.992 13:42:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.992 13:42:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.992 13:42:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:13:54.992 00:13:54.992 --- 10.0.0.2 ping statistics --- 00:13:54.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.992 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:54.992 13:42:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:54.992 00:13:54.992 --- 10.0.0.1 ping statistics --- 00:13:54.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.992 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:54.992 13:42:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.992 13:42:57 -- nvmf/common.sh@411 -- # return 0 00:13:54.992 13:42:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:54.992 13:42:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.992 13:42:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:54.992 13:42:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:54.992 13:42:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.992 13:42:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:54.992 13:42:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:54.992 13:42:57 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:54.992 13:42:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:54.992 13:42:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:54.992 13:42:57 -- common/autotest_common.sh@10 -- # set +x 00:13:54.992 13:42:57 -- nvmf/common.sh@470 -- # nvmfpid=2601601 00:13:54.992 13:42:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:54.992 13:42:57 -- nvmf/common.sh@471 -- # waitforlisten 2601601 00:13:54.992 13:42:57 -- common/autotest_common.sh@817 -- # '[' -z 2601601 ']' 00:13:54.992 13:42:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.992 13:42:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.992 13:42:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.992 13:42:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.992 13:42:57 -- common/autotest_common.sh@10 -- # set +x 00:13:54.992 [2024-04-18 13:42:57.622813] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:13:54.992 [2024-04-18 13:42:57.622879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.992 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.992 [2024-04-18 13:42:57.688033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.992 [2024-04-18 13:42:57.791741] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.993 [2024-04-18 13:42:57.791793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.993 [2024-04-18 13:42:57.791817] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.993 [2024-04-18 13:42:57.791828] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.993 [2024-04-18 13:42:57.791837] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.993 [2024-04-18 13:42:57.791865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.250 13:42:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.250 13:42:57 -- common/autotest_common.sh@850 -- # return 0 00:13:55.250 13:42:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:55.250 13:42:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.250 13:42:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.250 13:42:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.250 13:42:57 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:55.250 13:42:57 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:55.508 true 00:13:55.508 13:42:58 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:55.508 13:42:58 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:55.765 13:42:58 -- target/tls.sh@73 -- # version=0 00:13:55.765 13:42:58 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:55.765 13:42:58 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:55.765 13:42:58 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:55.765 13:42:58 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:56.024 13:42:58 -- target/tls.sh@81 -- # version=13 00:13:56.024 13:42:58 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:56.024 13:42:58 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:56.311 13:42:59 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.312 13:42:59 -- target/tls.sh@89 -- # jq -r .tls_version 
00:13:56.570 13:42:59 -- target/tls.sh@89 -- # version=7 00:13:56.570 13:42:59 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:56.570 13:42:59 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.570 13:42:59 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:56.828 13:42:59 -- target/tls.sh@96 -- # ktls=false 00:13:56.828 13:42:59 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:56.828 13:42:59 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:57.097 13:42:59 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.097 13:42:59 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:57.357 13:43:00 -- target/tls.sh@104 -- # ktls=true 00:13:57.357 13:43:00 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:57.357 13:43:00 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:57.615 13:43:00 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.615 13:43:00 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:57.873 13:43:00 -- target/tls.sh@112 -- # ktls=false 00:13:57.873 13:43:00 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:57.873 13:43:00 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:57.873 13:43:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:57.873 13:43:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # digest=1 00:13:57.873 13:43:00 -- nvmf/common.sh@694 -- # 
python - 00:13:57.873 13:43:00 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:57.873 13:43:00 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:57.873 13:43:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:57.873 13:43:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:57.873 13:43:00 -- nvmf/common.sh@693 -- # digest=1 00:13:57.873 13:43:00 -- nvmf/common.sh@694 -- # python - 00:13:57.873 13:43:00 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:57.873 13:43:00 -- target/tls.sh@121 -- # mktemp 00:13:57.873 13:43:00 -- target/tls.sh@121 -- # key_path=/tmp/tmp.WurJC7ooUZ 00:13:58.131 13:43:00 -- target/tls.sh@122 -- # mktemp 00:13:58.131 13:43:00 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.TsE0JxKQq0 00:13:58.131 13:43:00 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:58.131 13:43:00 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:58.131 13:43:00 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.WurJC7ooUZ 00:13:58.131 13:43:00 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TsE0JxKQq0 00:13:58.131 13:43:00 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:58.388 13:43:00 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:13:58.646 13:43:01 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.WurJC7ooUZ 00:13:58.646 13:43:01 -- target/tls.sh@49 -- # local key=/tmp/tmp.WurJC7ooUZ 00:13:58.646 13:43:01 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:58.904 [2024-04-18 13:43:01.600102] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.904 13:43:01 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:59.161 13:43:01 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:59.419 [2024-04-18 13:43:02.137527] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:59.419 [2024-04-18 13:43:02.137802] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.419 13:43:02 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:59.676 malloc0 00:13:59.677 13:43:02 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:59.934 13:43:02 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WurJC7ooUZ 00:14:00.191 [2024-04-18 13:43:02.856124] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:00.191 13:43:02 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WurJC7ooUZ 00:14:00.191 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.383 
Initializing NVMe Controllers 00:14:12.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:12.383 Initialization complete. Launching workers. 00:14:12.383 ======================================================== 00:14:12.383 Latency(us) 00:14:12.383 Device Information : IOPS MiB/s Average min max 00:14:12.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7449.89 29.10 8593.64 1266.99 9491.46 00:14:12.383 ======================================================== 00:14:12.383 Total : 7449.89 29.10 8593.64 1266.99 9491.46 00:14:12.383 00:14:12.383 13:43:12 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WurJC7ooUZ 00:14:12.383 13:43:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:12.383 13:43:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:12.383 13:43:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:12.383 13:43:12 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WurJC7ooUZ' 00:14:12.383 13:43:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.383 13:43:12 -- target/tls.sh@28 -- # bdevperf_pid=2603493 00:14:12.383 13:43:12 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:12.383 13:43:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.383 13:43:12 -- target/tls.sh@31 -- # waitforlisten 2603493 /var/tmp/bdevperf.sock 00:14:12.383 13:43:12 -- common/autotest_common.sh@817 -- # '[' -z 2603493 ']' 00:14:12.383 13:43:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.383 13:43:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:12.383 13:43:12 -- common/autotest_common.sh@824 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.383 13:43:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:12.383 13:43:12 -- common/autotest_common.sh@10 -- # set +x 00:14:12.383 [2024-04-18 13:43:13.015380] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:12.383 [2024-04-18 13:43:13.015472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603493 ] 00:14:12.383 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.383 [2024-04-18 13:43:13.074841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.383 [2024-04-18 13:43:13.179284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.383 13:43:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.383 13:43:13 -- common/autotest_common.sh@850 -- # return 0 00:14:12.383 13:43:13 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WurJC7ooUZ 00:14:12.383 [2024-04-18 13:43:13.505677] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.383 [2024-04-18 13:43:13.505789] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.383 TLSTESTn1 00:14:12.383 13:43:13 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:12.383 
Running I/O for 10 seconds... 00:14:22.351 00:14:22.351 Latency(us) 00:14:22.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.351 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:22.351 Verification LBA range: start 0x0 length 0x2000 00:14:22.351 TLSTESTn1 : 10.02 3652.36 14.27 0.00 0.00 34981.71 9660.49 77283.93 00:14:22.351 =================================================================================================================== 00:14:22.351 Total : 3652.36 14.27 0.00 0.00 34981.71 9660.49 77283.93 00:14:22.351 0 00:14:22.351 13:43:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.351 13:43:23 -- target/tls.sh@45 -- # killprocess 2603493 00:14:22.351 13:43:23 -- common/autotest_common.sh@936 -- # '[' -z 2603493 ']' 00:14:22.351 13:43:23 -- common/autotest_common.sh@940 -- # kill -0 2603493 00:14:22.351 13:43:23 -- common/autotest_common.sh@941 -- # uname 00:14:22.351 13:43:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.351 13:43:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603493 00:14:22.351 13:43:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:22.351 13:43:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:22.351 13:43:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603493' 00:14:22.351 killing process with pid 2603493 00:14:22.351 13:43:23 -- common/autotest_common.sh@955 -- # kill 2603493 00:14:22.351 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.351 00:14:22.351 Latency(us) 00:14:22.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.351 =================================================================================================================== 00:14:22.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.351 [2024-04-18 13:43:23.791495] app.c: 
937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.351 13:43:23 -- common/autotest_common.sh@960 -- # wait 2603493 00:14:22.351 13:43:24 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TsE0JxKQq0 00:14:22.351 13:43:24 -- common/autotest_common.sh@638 -- # local es=0 00:14:22.351 13:43:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TsE0JxKQq0 00:14:22.351 13:43:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:22.351 13:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.351 13:43:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:22.351 13:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.351 13:43:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TsE0JxKQq0 00:14:22.351 13:43:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.351 13:43:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.351 13:43:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.351 13:43:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TsE0JxKQq0' 00:14:22.351 13:43:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.351 13:43:24 -- target/tls.sh@28 -- # bdevperf_pid=2604812 00:14:22.351 13:43:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.351 13:43:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.351 13:43:24 -- target/tls.sh@31 -- # waitforlisten 2604812 /var/tmp/bdevperf.sock 00:14:22.351 13:43:24 -- common/autotest_common.sh@817 -- # '[' -z 2604812 ']' 00:14:22.351 
13:43:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.351 13:43:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.351 13:43:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.351 13:43:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.351 13:43:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.351 [2024-04-18 13:43:24.100028] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:22.351 [2024-04-18 13:43:24.100116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604812 ] 00:14:22.351 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.351 [2024-04-18 13:43:24.158954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.351 [2024-04-18 13:43:24.263729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.351 13:43:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.351 13:43:24 -- common/autotest_common.sh@850 -- # return 0 00:14:22.351 13:43:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TsE0JxKQq0 00:14:22.351 [2024-04-18 13:43:24.602959] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.351 [2024-04-18 13:43:24.603076] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed 
in v24.09 00:14:22.351 [2024-04-18 13:43:24.610921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:22.351 [2024-04-18 13:43:24.611975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fc690 (107): Transport endpoint is not connected 00:14:22.351 [2024-04-18 13:43:24.612966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fc690 (9): Bad file descriptor 00:14:22.351 [2024-04-18 13:43:24.613965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:22.351 [2024-04-18 13:43:24.613986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:22.351 [2024-04-18 13:43:24.613999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:22.351 request: 00:14:22.351 { 00:14:22.351 "name": "TLSTEST", 00:14:22.351 "trtype": "tcp", 00:14:22.351 "traddr": "10.0.0.2", 00:14:22.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.351 "adrfam": "ipv4", 00:14:22.351 "trsvcid": "4420", 00:14:22.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.351 "psk": "/tmp/tmp.TsE0JxKQq0", 00:14:22.351 "method": "bdev_nvme_attach_controller", 00:14:22.351 "req_id": 1 00:14:22.351 } 00:14:22.351 Got JSON-RPC error response 00:14:22.351 response: 00:14:22.351 { 00:14:22.351 "code": -32602, 00:14:22.351 "message": "Invalid parameters" 00:14:22.351 } 00:14:22.351 13:43:24 -- target/tls.sh@36 -- # killprocess 2604812 00:14:22.351 13:43:24 -- common/autotest_common.sh@936 -- # '[' -z 2604812 ']' 00:14:22.351 13:43:24 -- common/autotest_common.sh@940 -- # kill -0 2604812 00:14:22.351 13:43:24 -- common/autotest_common.sh@941 -- # uname 00:14:22.351 13:43:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.351 13:43:24 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2604812 00:14:22.351 13:43:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:22.352 13:43:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:22.352 13:43:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2604812' 00:14:22.352 killing process with pid 2604812 00:14:22.352 13:43:24 -- common/autotest_common.sh@955 -- # kill 2604812 00:14:22.352 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.352 00:14:22.352 Latency(us) 00:14:22.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.352 =================================================================================================================== 00:14:22.352 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.352 [2024-04-18 13:43:24.661230] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.352 13:43:24 -- common/autotest_common.sh@960 -- # wait 2604812 00:14:22.352 13:43:24 -- target/tls.sh@37 -- # return 1 00:14:22.352 13:43:24 -- common/autotest_common.sh@641 -- # es=1 00:14:22.352 13:43:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:22.352 13:43:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:22.352 13:43:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:22.352 13:43:24 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WurJC7ooUZ 00:14:22.352 13:43:24 -- common/autotest_common.sh@638 -- # local es=0 00:14:22.352 13:43:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WurJC7ooUZ 00:14:22.352 13:43:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:22.352 13:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:14:22.352 13:43:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:22.352 13:43:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.352 13:43:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WurJC7ooUZ 00:14:22.352 13:43:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.352 13:43:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.352 13:43:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:22.352 13:43:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WurJC7ooUZ' 00:14:22.352 13:43:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.352 13:43:24 -- target/tls.sh@28 -- # bdevperf_pid=2604834 00:14:22.352 13:43:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.352 13:43:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.352 13:43:24 -- target/tls.sh@31 -- # waitforlisten 2604834 /var/tmp/bdevperf.sock 00:14:22.352 13:43:24 -- common/autotest_common.sh@817 -- # '[' -z 2604834 ']' 00:14:22.352 13:43:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.352 13:43:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.352 13:43:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.352 13:43:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.352 13:43:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.352 [2024-04-18 13:43:24.966536] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:22.352 [2024-04-18 13:43:24.966623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604834 ] 00:14:22.352 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.352 [2024-04-18 13:43:25.024357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.352 [2024-04-18 13:43:25.127298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.610 13:43:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.610 13:43:25 -- common/autotest_common.sh@850 -- # return 0 00:14:22.610 13:43:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.WurJC7ooUZ 00:14:22.868 [2024-04-18 13:43:25.452491] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.868 [2024-04-18 13:43:25.452635] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:22.868 [2024-04-18 13:43:25.463019] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:22.868 [2024-04-18 13:43:25.463053] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:22.868 [2024-04-18 13:43:25.463096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:22.868 [2024-04-18 13:43:25.463542] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee690 (107): Transport endpoint is not connected 00:14:22.868 [2024-04-18 13:43:25.464532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee690 (9): Bad file descriptor 00:14:22.868 [2024-04-18 13:43:25.465533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:22.868 [2024-04-18 13:43:25.465555] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:22.868 [2024-04-18 13:43:25.465569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:22.868 request: 00:14:22.868 { 00:14:22.868 "name": "TLSTEST", 00:14:22.868 "trtype": "tcp", 00:14:22.868 "traddr": "10.0.0.2", 00:14:22.868 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:22.868 "adrfam": "ipv4", 00:14:22.868 "trsvcid": "4420", 00:14:22.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.868 "psk": "/tmp/tmp.WurJC7ooUZ", 00:14:22.868 "method": "bdev_nvme_attach_controller", 00:14:22.868 "req_id": 1 00:14:22.868 } 00:14:22.868 Got JSON-RPC error response 00:14:22.868 response: 00:14:22.868 { 00:14:22.868 "code": -32602, 00:14:22.868 "message": "Invalid parameters" 00:14:22.868 } 00:14:22.868 13:43:25 -- target/tls.sh@36 -- # killprocess 2604834 00:14:22.868 13:43:25 -- common/autotest_common.sh@936 -- # '[' -z 2604834 ']' 00:14:22.868 13:43:25 -- common/autotest_common.sh@940 -- # kill -0 2604834 00:14:22.868 13:43:25 -- common/autotest_common.sh@941 -- # uname 00:14:22.868 13:43:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.868 13:43:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2604834 00:14:22.868 13:43:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:22.868 13:43:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:22.868 13:43:25 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 2604834' 00:14:22.868 killing process with pid 2604834 00:14:22.868 13:43:25 -- common/autotest_common.sh@955 -- # kill 2604834 00:14:22.868 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.868 00:14:22.868 Latency(us) 00:14:22.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.868 =================================================================================================================== 00:14:22.868 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.868 [2024-04-18 13:43:25.516301] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.868 13:43:25 -- common/autotest_common.sh@960 -- # wait 2604834 00:14:23.126 13:43:25 -- target/tls.sh@37 -- # return 1 00:14:23.126 13:43:25 -- common/autotest_common.sh@641 -- # es=1 00:14:23.126 13:43:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.126 13:43:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.126 13:43:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.126 13:43:25 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WurJC7ooUZ 00:14:23.126 13:43:25 -- common/autotest_common.sh@638 -- # local es=0 00:14:23.126 13:43:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WurJC7ooUZ 00:14:23.126 13:43:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:23.126 13:43:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.126 13:43:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:23.126 13:43:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.126 13:43:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WurJC7ooUZ 00:14:23.126 13:43:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.126 13:43:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:23.126 13:43:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.126 13:43:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WurJC7ooUZ' 00:14:23.126 13:43:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.126 13:43:25 -- target/tls.sh@28 -- # bdevperf_pid=2604974 00:14:23.126 13:43:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.126 13:43:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.126 13:43:25 -- target/tls.sh@31 -- # waitforlisten 2604974 /var/tmp/bdevperf.sock 00:14:23.126 13:43:25 -- common/autotest_common.sh@817 -- # '[' -z 2604974 ']' 00:14:23.126 13:43:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.126 13:43:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.126 13:43:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.126 13:43:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.126 13:43:25 -- common/autotest_common.sh@10 -- # set +x 00:14:23.127 [2024-04-18 13:43:25.816317] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:23.127 [2024-04-18 13:43:25.816408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604974 ] 00:14:23.127 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.127 [2024-04-18 13:43:25.873824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.384 [2024-04-18 13:43:25.976882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.384 13:43:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.384 13:43:26 -- common/autotest_common.sh@850 -- # return 0 00:14:23.384 13:43:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WurJC7ooUZ 00:14:23.642 [2024-04-18 13:43:26.323294] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.642 [2024-04-18 13:43:26.323416] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.642 [2024-04-18 13:43:26.335241] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:23.642 [2024-04-18 13:43:26.335274] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:23.642 [2024-04-18 13:43:26.335317] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:23.642 [2024-04-18 13:43:26.336392] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2537690 (107): Transport endpoint is not connected 00:14:23.642 [2024-04-18 13:43:26.337383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2537690 (9): Bad file descriptor 00:14:23.642 [2024-04-18 13:43:26.338383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:23.642 [2024-04-18 13:43:26.338406] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:23.642 [2024-04-18 13:43:26.338419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:14:23.642 request: 00:14:23.642 { 00:14:23.642 "name": "TLSTEST", 00:14:23.642 "trtype": "tcp", 00:14:23.642 "traddr": "10.0.0.2", 00:14:23.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.642 "adrfam": "ipv4", 00:14:23.642 "trsvcid": "4420", 00:14:23.642 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:23.642 "psk": "/tmp/tmp.WurJC7ooUZ", 00:14:23.642 "method": "bdev_nvme_attach_controller", 00:14:23.642 "req_id": 1 00:14:23.642 } 00:14:23.642 Got JSON-RPC error response 00:14:23.642 response: 00:14:23.642 { 00:14:23.642 "code": -32602, 00:14:23.642 "message": "Invalid parameters" 00:14:23.642 } 00:14:23.642 13:43:26 -- target/tls.sh@36 -- # killprocess 2604974 00:14:23.642 13:43:26 -- common/autotest_common.sh@936 -- # '[' -z 2604974 ']' 00:14:23.642 13:43:26 -- common/autotest_common.sh@940 -- # kill -0 2604974 00:14:23.642 13:43:26 -- common/autotest_common.sh@941 -- # uname 00:14:23.642 13:43:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.642 13:43:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2604974 00:14:23.642 13:43:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:23.642 13:43:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:23.642 13:43:26 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 2604974' 00:14:23.642 killing process with pid 2604974 00:14:23.642 13:43:26 -- common/autotest_common.sh@955 -- # kill 2604974 00:14:23.642 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.642 00:14:23.642 Latency(us) 00:14:23.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.642 =================================================================================================================== 00:14:23.642 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.642 [2024-04-18 13:43:26.387468] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:23.642 13:43:26 -- common/autotest_common.sh@960 -- # wait 2604974 00:14:23.900 13:43:26 -- target/tls.sh@37 -- # return 1 00:14:23.900 13:43:26 -- common/autotest_common.sh@641 -- # es=1 00:14:23.900 13:43:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.900 13:43:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.900 13:43:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.900 13:43:26 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:23.900 13:43:26 -- common/autotest_common.sh@638 -- # local es=0 00:14:23.900 13:43:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:23.900 13:43:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:23.900 13:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.900 13:43:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:23.900 13:43:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:23.900 13:43:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:23.900 13:43:26 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.900 13:43:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.900 13:43:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.900 13:43:26 -- target/tls.sh@23 -- # psk= 00:14:23.900 13:43:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.900 13:43:26 -- target/tls.sh@28 -- # bdevperf_pid=2605107 00:14:23.900 13:43:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.900 13:43:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.900 13:43:26 -- target/tls.sh@31 -- # waitforlisten 2605107 /var/tmp/bdevperf.sock 00:14:23.900 13:43:26 -- common/autotest_common.sh@817 -- # '[' -z 2605107 ']' 00:14:23.900 13:43:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.900 13:43:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.900 13:43:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.900 13:43:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.900 13:43:26 -- common/autotest_common.sh@10 -- # set +x 00:14:23.900 [2024-04-18 13:43:26.690923] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:23.900 [2024-04-18 13:43:26.691014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605107 ] 00:14:24.158 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.158 [2024-04-18 13:43:26.749663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.158 [2024-04-18 13:43:26.853573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.441 13:43:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.441 13:43:26 -- common/autotest_common.sh@850 -- # return 0 00:14:24.441 13:43:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:24.441 [2024-04-18 13:43:27.188510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:24.441 [2024-04-18 13:43:27.190379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x601fd0 (9): Bad file descriptor 00:14:24.441 [2024-04-18 13:43:27.191374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:24.441 [2024-04-18 13:43:27.191397] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:24.441 [2024-04-18 13:43:27.191411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:24.441 request: 00:14:24.441 { 00:14:24.441 "name": "TLSTEST", 00:14:24.441 "trtype": "tcp", 00:14:24.441 "traddr": "10.0.0.2", 00:14:24.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.441 "adrfam": "ipv4", 00:14:24.441 "trsvcid": "4420", 00:14:24.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.441 "method": "bdev_nvme_attach_controller", 00:14:24.441 "req_id": 1 00:14:24.441 } 00:14:24.441 Got JSON-RPC error response 00:14:24.441 response: 00:14:24.441 { 00:14:24.441 "code": -32602, 00:14:24.441 "message": "Invalid parameters" 00:14:24.441 } 00:14:24.441 13:43:27 -- target/tls.sh@36 -- # killprocess 2605107 00:14:24.441 13:43:27 -- common/autotest_common.sh@936 -- # '[' -z 2605107 ']' 00:14:24.441 13:43:27 -- common/autotest_common.sh@940 -- # kill -0 2605107 00:14:24.441 13:43:27 -- common/autotest_common.sh@941 -- # uname 00:14:24.441 13:43:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.441 13:43:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605107 00:14:24.700 13:43:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:24.700 13:43:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:24.700 13:43:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605107' 00:14:24.700 killing process with pid 2605107 00:14:24.700 13:43:27 -- common/autotest_common.sh@955 -- # kill 2605107 00:14:24.700 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.700 00:14:24.700 Latency(us) 00:14:24.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.700 =================================================================================================================== 00:14:24.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:24.700 13:43:27 -- common/autotest_common.sh@960 -- # wait 2605107 00:14:24.700 13:43:27 -- target/tls.sh@37 -- # return 1 00:14:24.700 13:43:27 -- 
common/autotest_common.sh@641 -- # es=1 00:14:24.700 13:43:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:24.700 13:43:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:24.700 13:43:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:24.700 13:43:27 -- target/tls.sh@158 -- # killprocess 2601601 00:14:24.700 13:43:27 -- common/autotest_common.sh@936 -- # '[' -z 2601601 ']' 00:14:24.700 13:43:27 -- common/autotest_common.sh@940 -- # kill -0 2601601 00:14:24.700 13:43:27 -- common/autotest_common.sh@941 -- # uname 00:14:24.700 13:43:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.700 13:43:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2601601 00:14:24.958 13:43:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:24.958 13:43:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:24.958 13:43:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2601601' 00:14:24.958 killing process with pid 2601601 00:14:24.958 13:43:27 -- common/autotest_common.sh@955 -- # kill 2601601 00:14:24.958 [2024-04-18 13:43:27.525696] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:24.958 13:43:27 -- common/autotest_common.sh@960 -- # wait 2601601 00:14:25.216 13:43:27 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:25.216 13:43:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:25.216 13:43:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:25.216 13:43:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:25.216 13:43:27 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:25.216 13:43:27 -- nvmf/common.sh@693 -- # digest=2 00:14:25.216 13:43:27 -- nvmf/common.sh@694 -- # python - 00:14:25.216 13:43:27 -- 
target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:25.216 13:43:27 -- target/tls.sh@160 -- # mktemp 00:14:25.216 13:43:27 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.1Zj0dXTB8P 00:14:25.216 13:43:27 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:25.216 13:43:27 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.1Zj0dXTB8P 00:14:25.216 13:43:27 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:25.216 13:43:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:25.216 13:43:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.216 13:43:27 -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 13:43:27 -- nvmf/common.sh@470 -- # nvmfpid=2605260 00:14:25.216 13:43:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.216 13:43:27 -- nvmf/common.sh@471 -- # waitforlisten 2605260 00:14:25.216 13:43:27 -- common/autotest_common.sh@817 -- # '[' -z 2605260 ']' 00:14:25.216 13:43:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.216 13:43:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.216 13:43:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.216 13:43:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.216 13:43:27 -- common/autotest_common.sh@10 -- # set +x 00:14:25.216 [2024-04-18 13:43:27.924942] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:25.216 [2024-04-18 13:43:27.925033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.216 [2024-04-18 13:43:27.993140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.474 [2024-04-18 13:43:28.107122] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.474 [2024-04-18 13:43:28.107209] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.474 [2024-04-18 13:43:28.107227] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.474 [2024-04-18 13:43:28.107240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.474 [2024-04-18 13:43:28.107252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.474 [2024-04-18 13:43:28.107296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.408 13:43:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.408 13:43:28 -- common/autotest_common.sh@850 -- # return 0 00:14:26.408 13:43:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:26.408 13:43:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:26.408 13:43:28 -- common/autotest_common.sh@10 -- # set +x 00:14:26.408 13:43:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.408 13:43:28 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:14:26.408 13:43:28 -- target/tls.sh@49 -- # local key=/tmp/tmp.1Zj0dXTB8P 00:14:26.408 13:43:28 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:26.408 [2024-04-18 13:43:29.178249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.408 13:43:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:26.972 13:43:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:26.972 [2024-04-18 13:43:29.751765] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:26.972 [2024-04-18 13:43:29.752045] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.972 13:43:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:27.537 malloc0 00:14:27.537 13:43:30 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:27.537 13:43:30 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:27.794 [2024-04-18 13:43:30.530073] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:27.795 13:43:30 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Zj0dXTB8P 00:14:27.795 13:43:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:27.795 13:43:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:27.795 13:43:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:27.795 13:43:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1Zj0dXTB8P' 00:14:27.795 13:43:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.795 13:43:30 -- target/tls.sh@28 -- # bdevperf_pid=2605556 00:14:27.795 13:43:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.795 13:43:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.795 13:43:30 -- target/tls.sh@31 -- # waitforlisten 2605556 /var/tmp/bdevperf.sock 00:14:27.795 13:43:30 -- common/autotest_common.sh@817 -- # '[' -z 2605556 ']' 00:14:27.795 13:43:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.795 13:43:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.795 13:43:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:27.795 13:43:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.795 13:43:30 -- common/autotest_common.sh@10 -- # set +x 00:14:27.795 [2024-04-18 13:43:30.594413] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:27.795 [2024-04-18 13:43:30.594506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605556 ] 00:14:28.053 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.053 [2024-04-18 13:43:30.658357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.053 [2024-04-18 13:43:30.766535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.311 13:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.311 13:43:30 -- common/autotest_common.sh@850 -- # return 0 00:14:28.311 13:43:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:28.311 [2024-04-18 13:43:31.093019] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.311 [2024-04-18 13:43:31.093142] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:28.569 TLSTESTn1 00:14:28.569 13:43:31 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:28.569 Running I/O for 10 seconds... 
00:14:38.535 00:14:38.535 Latency(us) 00:14:38.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:38.535 Verification LBA range: start 0x0 length 0x2000 00:14:38.535 TLSTESTn1 : 10.02 3693.22 14.43 0.00 0.00 34594.31 5534.15 86992.97 00:14:38.535 =================================================================================================================== 00:14:38.535 Total : 3693.22 14.43 0.00 0.00 34594.31 5534.15 86992.97 00:14:38.535 0 00:14:38.793 13:43:41 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.793 13:43:41 -- target/tls.sh@45 -- # killprocess 2605556 00:14:38.793 13:43:41 -- common/autotest_common.sh@936 -- # '[' -z 2605556 ']' 00:14:38.793 13:43:41 -- common/autotest_common.sh@940 -- # kill -0 2605556 00:14:38.793 13:43:41 -- common/autotest_common.sh@941 -- # uname 00:14:38.793 13:43:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.793 13:43:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605556 00:14:38.793 13:43:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:38.793 13:43:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:38.793 13:43:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605556' 00:14:38.793 killing process with pid 2605556 00:14:38.793 13:43:41 -- common/autotest_common.sh@955 -- # kill 2605556 00:14:38.793 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.793 00:14:38.793 Latency(us) 00:14:38.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.793 =================================================================================================================== 00:14:38.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.793 [2024-04-18 13:43:41.381838] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:38.793 13:43:41 -- common/autotest_common.sh@960 -- # wait 2605556 00:14:39.050 13:43:41 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.1Zj0dXTB8P 00:14:39.050 13:43:41 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Zj0dXTB8P 00:14:39.050 13:43:41 -- common/autotest_common.sh@638 -- # local es=0 00:14:39.050 13:43:41 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Zj0dXTB8P 00:14:39.050 13:43:41 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:39.051 13:43:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:39.051 13:43:41 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:39.051 13:43:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:39.051 13:43:41 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Zj0dXTB8P 00:14:39.051 13:43:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:39.051 13:43:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:39.051 13:43:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:39.051 13:43:41 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1Zj0dXTB8P' 00:14:39.051 13:43:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.051 13:43:41 -- target/tls.sh@28 -- # bdevperf_pid=2606873 00:14:39.051 13:43:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:39.051 13:43:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.051 13:43:41 -- target/tls.sh@31 -- # waitforlisten 2606873 /var/tmp/bdevperf.sock 00:14:39.051 13:43:41 -- common/autotest_common.sh@817 -- # '[' -z 
2606873 ']' 00:14:39.051 13:43:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.051 13:43:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:39.051 13:43:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.051 13:43:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:39.051 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:14:39.051 [2024-04-18 13:43:41.695800] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:39.051 [2024-04-18 13:43:41.695898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606873 ] 00:14:39.051 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.051 [2024-04-18 13:43:41.755784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.308 [2024-04-18 13:43:41.861979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.308 13:43:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:39.308 13:43:41 -- common/autotest_common.sh@850 -- # return 0 00:14:39.308 13:43:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:39.566 [2024-04-18 13:43:42.185883] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:39.566 [2024-04-18 13:43:42.185967] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:39.566 [2024-04-18 
13:43:42.185981] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.1Zj0dXTB8P 00:14:39.566 request: 00:14:39.566 { 00:14:39.566 "name": "TLSTEST", 00:14:39.566 "trtype": "tcp", 00:14:39.566 "traddr": "10.0.0.2", 00:14:39.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.566 "adrfam": "ipv4", 00:14:39.566 "trsvcid": "4420", 00:14:39.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.566 "psk": "/tmp/tmp.1Zj0dXTB8P", 00:14:39.566 "method": "bdev_nvme_attach_controller", 00:14:39.566 "req_id": 1 00:14:39.566 } 00:14:39.566 Got JSON-RPC error response 00:14:39.566 response: 00:14:39.566 { 00:14:39.566 "code": -1, 00:14:39.566 "message": "Operation not permitted" 00:14:39.566 } 00:14:39.566 13:43:42 -- target/tls.sh@36 -- # killprocess 2606873 00:14:39.566 13:43:42 -- common/autotest_common.sh@936 -- # '[' -z 2606873 ']' 00:14:39.566 13:43:42 -- common/autotest_common.sh@940 -- # kill -0 2606873 00:14:39.566 13:43:42 -- common/autotest_common.sh@941 -- # uname 00:14:39.566 13:43:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.566 13:43:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2606873 00:14:39.566 13:43:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:39.566 13:43:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:39.566 13:43:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606873' 00:14:39.566 killing process with pid 2606873 00:14:39.566 13:43:42 -- common/autotest_common.sh@955 -- # kill 2606873 00:14:39.566 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.566 00:14:39.566 Latency(us) 00:14:39.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.566 =================================================================================================================== 00:14:39.566 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.566 13:43:42 -- 
common/autotest_common.sh@960 -- # wait 2606873 00:14:39.824 13:43:42 -- target/tls.sh@37 -- # return 1 00:14:39.824 13:43:42 -- common/autotest_common.sh@641 -- # es=1 00:14:39.824 13:43:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:39.824 13:43:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:39.824 13:43:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:39.824 13:43:42 -- target/tls.sh@174 -- # killprocess 2605260 00:14:39.824 13:43:42 -- common/autotest_common.sh@936 -- # '[' -z 2605260 ']' 00:14:39.824 13:43:42 -- common/autotest_common.sh@940 -- # kill -0 2605260 00:14:39.824 13:43:42 -- common/autotest_common.sh@941 -- # uname 00:14:39.824 13:43:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.824 13:43:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2605260 00:14:39.824 13:43:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:39.824 13:43:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:39.824 13:43:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2605260' 00:14:39.824 killing process with pid 2605260 00:14:39.824 13:43:42 -- common/autotest_common.sh@955 -- # kill 2605260 00:14:39.824 [2024-04-18 13:43:42.489282] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.824 13:43:42 -- common/autotest_common.sh@960 -- # wait 2605260 00:14:40.082 13:43:42 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:40.082 13:43:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:40.082 13:43:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:40.082 13:43:42 -- common/autotest_common.sh@10 -- # set +x 00:14:40.082 13:43:42 -- nvmf/common.sh@470 -- # nvmfpid=2607025 00:14:40.082 13:43:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:14:40.082 13:43:42 -- nvmf/common.sh@471 -- # waitforlisten 2607025 00:14:40.082 13:43:42 -- common/autotest_common.sh@817 -- # '[' -z 2607025 ']' 00:14:40.082 13:43:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.082 13:43:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.082 13:43:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.082 13:43:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.082 13:43:42 -- common/autotest_common.sh@10 -- # set +x 00:14:40.082 [2024-04-18 13:43:42.812496] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:40.082 [2024-04-18 13:43:42.812585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.082 [2024-04-18 13:43:42.875838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.340 [2024-04-18 13:43:42.985425] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.340 [2024-04-18 13:43:42.985503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.340 [2024-04-18 13:43:42.985518] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.340 [2024-04-18 13:43:42.985530] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.340 [2024-04-18 13:43:42.985555] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:40.340 [2024-04-18 13:43:42.985587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.340 13:43:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:40.340 13:43:43 -- common/autotest_common.sh@850 -- # return 0 00:14:40.340 13:43:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:40.340 13:43:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:40.340 13:43:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.341 13:43:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.341 13:43:43 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:14:40.341 13:43:43 -- common/autotest_common.sh@638 -- # local es=0 00:14:40.341 13:43:43 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:14:40.341 13:43:43 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:40.341 13:43:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:40.341 13:43:43 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:40.341 13:43:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:40.341 13:43:43 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:14:40.341 13:43:43 -- target/tls.sh@49 -- # local key=/tmp/tmp.1Zj0dXTB8P 00:14:40.341 13:43:43 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.597 [2024-04-18 13:43:43.349204] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.597 13:43:43 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.854 13:43:43 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -k 00:14:41.111 [2024-04-18 13:43:43.826479] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.111 [2024-04-18 13:43:43.826734] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.111 13:43:43 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.369 malloc0 00:14:41.369 13:43:44 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.626 13:43:44 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:41.883 [2024-04-18 13:43:44.559562] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:41.883 [2024-04-18 13:43:44.559606] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:41.883 [2024-04-18 13:43:44.559646] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:41.883 request: 00:14:41.884 { 00:14:41.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.884 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.884 "psk": "/tmp/tmp.1Zj0dXTB8P", 00:14:41.884 "method": "nvmf_subsystem_add_host", 00:14:41.884 "req_id": 1 00:14:41.884 } 00:14:41.884 Got JSON-RPC error response 00:14:41.884 response: 00:14:41.884 { 00:14:41.884 "code": -32603, 00:14:41.884 "message": "Internal error" 00:14:41.884 } 00:14:41.884 13:43:44 -- common/autotest_common.sh@641 -- # es=1 00:14:41.884 13:43:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:41.884 13:43:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:41.884 13:43:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:41.884 13:43:44 -- target/tls.sh@180 -- # killprocess 2607025 
00:14:41.884 13:43:44 -- common/autotest_common.sh@936 -- # '[' -z 2607025 ']' 00:14:41.884 13:43:44 -- common/autotest_common.sh@940 -- # kill -0 2607025 00:14:41.884 13:43:44 -- common/autotest_common.sh@941 -- # uname 00:14:41.884 13:43:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.884 13:43:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2607025 00:14:41.884 13:43:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:41.884 13:43:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:41.884 13:43:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2607025' 00:14:41.884 killing process with pid 2607025 00:14:41.884 13:43:44 -- common/autotest_common.sh@955 -- # kill 2607025 00:14:41.884 13:43:44 -- common/autotest_common.sh@960 -- # wait 2607025 00:14:42.142 13:43:44 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.1Zj0dXTB8P 00:14:42.142 13:43:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:42.142 13:43:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.142 13:43:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.142 13:43:44 -- common/autotest_common.sh@10 -- # set +x 00:14:42.142 13:43:44 -- nvmf/common.sh@470 -- # nvmfpid=2607315 00:14:42.142 13:43:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:42.142 13:43:44 -- nvmf/common.sh@471 -- # waitforlisten 2607315 00:14:42.142 13:43:44 -- common/autotest_common.sh@817 -- # '[' -z 2607315 ']' 00:14:42.142 13:43:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.142 13:43:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.142 13:43:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:42.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.142 13:43:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.142 13:43:44 -- common/autotest_common.sh@10 -- # set +x 00:14:42.400 [2024-04-18 13:43:44.957819] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:42.400 [2024-04-18 13:43:44.957917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.400 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.400 [2024-04-18 13:43:45.025884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.400 [2024-04-18 13:43:45.139681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.400 [2024-04-18 13:43:45.139743] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.400 [2024-04-18 13:43:45.139766] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.400 [2024-04-18 13:43:45.139778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.400 [2024-04-18 13:43:45.139788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.400 [2024-04-18 13:43:45.139821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.332 13:43:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.332 13:43:45 -- common/autotest_common.sh@850 -- # return 0 00:14:43.332 13:43:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.332 13:43:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.332 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:14:43.332 13:43:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.332 13:43:45 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:14:43.332 13:43:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.1Zj0dXTB8P 00:14:43.332 13:43:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:43.332 [2024-04-18 13:43:46.117265] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.332 13:43:46 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:43.590 13:43:46 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:43.847 [2024-04-18 13:43:46.598558] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:43.847 [2024-04-18 13:43:46.598832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.847 13:43:46 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:44.140 malloc0 00:14:44.140 13:43:46 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:44.420 13:43:47 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:44.679 [2024-04-18 13:43:47.320055] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:44.679 13:43:47 -- target/tls.sh@188 -- # bdevperf_pid=2607609 00:14:44.679 13:43:47 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.679 13:43:47 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.679 13:43:47 -- target/tls.sh@191 -- # waitforlisten 2607609 /var/tmp/bdevperf.sock 00:14:44.679 13:43:47 -- common/autotest_common.sh@817 -- # '[' -z 2607609 ']' 00:14:44.679 13:43:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.679 13:43:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:44.679 13:43:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.679 13:43:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:44.679 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:14:44.679 [2024-04-18 13:43:47.377397] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:44.679 [2024-04-18 13:43:47.377466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607609 ] 00:14:44.679 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.679 [2024-04-18 13:43:47.439813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.937 [2024-04-18 13:43:47.550615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.937 13:43:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.937 13:43:47 -- common/autotest_common.sh@850 -- # return 0 00:14:44.937 13:43:47 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:14:45.195 [2024-04-18 13:43:47.877382] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.195 [2024-04-18 13:43:47.877500] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:45.195 TLSTESTn1 00:14:45.195 13:43:47 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:14:45.760 13:43:48 -- target/tls.sh@196 -- # tgtconf='{ 00:14:45.760 "subsystems": [ 00:14:45.760 { 00:14:45.760 "subsystem": "keyring", 00:14:45.760 "config": [] 00:14:45.760 }, 00:14:45.760 { 00:14:45.760 "subsystem": "iobuf", 00:14:45.760 "config": [ 00:14:45.760 { 00:14:45.760 "method": "iobuf_set_options", 00:14:45.760 "params": { 00:14:45.760 "small_pool_count": 8192, 00:14:45.760 "large_pool_count": 1024, 00:14:45.760 "small_bufsize": 8192, 00:14:45.760 "large_bufsize": 135168 00:14:45.760 } 00:14:45.760 } 
00:14:45.760 ] 00:14:45.760 }, 00:14:45.760 { 00:14:45.760 "subsystem": "sock", 00:14:45.760 "config": [ 00:14:45.760 { 00:14:45.760 "method": "sock_impl_set_options", 00:14:45.760 "params": { 00:14:45.760 "impl_name": "posix", 00:14:45.760 "recv_buf_size": 2097152, 00:14:45.760 "send_buf_size": 2097152, 00:14:45.760 "enable_recv_pipe": true, 00:14:45.760 "enable_quickack": false, 00:14:45.760 "enable_placement_id": 0, 00:14:45.760 "enable_zerocopy_send_server": true, 00:14:45.760 "enable_zerocopy_send_client": false, 00:14:45.760 "zerocopy_threshold": 0, 00:14:45.760 "tls_version": 0, 00:14:45.760 "enable_ktls": false 00:14:45.760 } 00:14:45.760 }, 00:14:45.760 { 00:14:45.760 "method": "sock_impl_set_options", 00:14:45.760 "params": { 00:14:45.760 "impl_name": "ssl", 00:14:45.760 "recv_buf_size": 4096, 00:14:45.760 "send_buf_size": 4096, 00:14:45.760 "enable_recv_pipe": true, 00:14:45.760 "enable_quickack": false, 00:14:45.760 "enable_placement_id": 0, 00:14:45.760 "enable_zerocopy_send_server": true, 00:14:45.760 "enable_zerocopy_send_client": false, 00:14:45.760 "zerocopy_threshold": 0, 00:14:45.760 "tls_version": 0, 00:14:45.760 "enable_ktls": false 00:14:45.760 } 00:14:45.760 } 00:14:45.760 ] 00:14:45.760 }, 00:14:45.760 { 00:14:45.760 "subsystem": "vmd", 00:14:45.760 "config": [] 00:14:45.760 }, 00:14:45.760 { 00:14:45.761 "subsystem": "accel", 00:14:45.761 "config": [ 00:14:45.761 { 00:14:45.761 "method": "accel_set_options", 00:14:45.761 "params": { 00:14:45.761 "small_cache_size": 128, 00:14:45.761 "large_cache_size": 16, 00:14:45.761 "task_count": 2048, 00:14:45.761 "sequence_count": 2048, 00:14:45.761 "buf_count": 2048 00:14:45.761 } 00:14:45.761 } 00:14:45.761 ] 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "subsystem": "bdev", 00:14:45.761 "config": [ 00:14:45.761 { 00:14:45.761 "method": "bdev_set_options", 00:14:45.761 "params": { 00:14:45.761 "bdev_io_pool_size": 65535, 00:14:45.761 "bdev_io_cache_size": 256, 00:14:45.761 "bdev_auto_examine": true, 
00:14:45.761 "iobuf_small_cache_size": 128, 00:14:45.761 "iobuf_large_cache_size": 16 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_raid_set_options", 00:14:45.761 "params": { 00:14:45.761 "process_window_size_kb": 1024 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_iscsi_set_options", 00:14:45.761 "params": { 00:14:45.761 "timeout_sec": 30 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_nvme_set_options", 00:14:45.761 "params": { 00:14:45.761 "action_on_timeout": "none", 00:14:45.761 "timeout_us": 0, 00:14:45.761 "timeout_admin_us": 0, 00:14:45.761 "keep_alive_timeout_ms": 10000, 00:14:45.761 "arbitration_burst": 0, 00:14:45.761 "low_priority_weight": 0, 00:14:45.761 "medium_priority_weight": 0, 00:14:45.761 "high_priority_weight": 0, 00:14:45.761 "nvme_adminq_poll_period_us": 10000, 00:14:45.761 "nvme_ioq_poll_period_us": 0, 00:14:45.761 "io_queue_requests": 0, 00:14:45.761 "delay_cmd_submit": true, 00:14:45.761 "transport_retry_count": 4, 00:14:45.761 "bdev_retry_count": 3, 00:14:45.761 "transport_ack_timeout": 0, 00:14:45.761 "ctrlr_loss_timeout_sec": 0, 00:14:45.761 "reconnect_delay_sec": 0, 00:14:45.761 "fast_io_fail_timeout_sec": 0, 00:14:45.761 "disable_auto_failback": false, 00:14:45.761 "generate_uuids": false, 00:14:45.761 "transport_tos": 0, 00:14:45.761 "nvme_error_stat": false, 00:14:45.761 "rdma_srq_size": 0, 00:14:45.761 "io_path_stat": false, 00:14:45.761 "allow_accel_sequence": false, 00:14:45.761 "rdma_max_cq_size": 0, 00:14:45.761 "rdma_cm_event_timeout_ms": 0, 00:14:45.761 "dhchap_digests": [ 00:14:45.761 "sha256", 00:14:45.761 "sha384", 00:14:45.761 "sha512" 00:14:45.761 ], 00:14:45.761 "dhchap_dhgroups": [ 00:14:45.761 "null", 00:14:45.761 "ffdhe2048", 00:14:45.761 "ffdhe3072", 00:14:45.761 "ffdhe4096", 00:14:45.761 "ffdhe6144", 00:14:45.761 "ffdhe8192" 00:14:45.761 ] 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_nvme_set_hotplug", 
00:14:45.761 "params": { 00:14:45.761 "period_us": 100000, 00:14:45.761 "enable": false 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_malloc_create", 00:14:45.761 "params": { 00:14:45.761 "name": "malloc0", 00:14:45.761 "num_blocks": 8192, 00:14:45.761 "block_size": 4096, 00:14:45.761 "physical_block_size": 4096, 00:14:45.761 "uuid": "9a08a002-ca84-4318-8355-69f9f676aeec", 00:14:45.761 "optimal_io_boundary": 0 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "bdev_wait_for_examine" 00:14:45.761 } 00:14:45.761 ] 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "subsystem": "nbd", 00:14:45.761 "config": [] 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "subsystem": "scheduler", 00:14:45.761 "config": [ 00:14:45.761 { 00:14:45.761 "method": "framework_set_scheduler", 00:14:45.761 "params": { 00:14:45.761 "name": "static" 00:14:45.761 } 00:14:45.761 } 00:14:45.761 ] 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "subsystem": "nvmf", 00:14:45.761 "config": [ 00:14:45.761 { 00:14:45.761 "method": "nvmf_set_config", 00:14:45.761 "params": { 00:14:45.761 "discovery_filter": "match_any", 00:14:45.761 "admin_cmd_passthru": { 00:14:45.761 "identify_ctrlr": false 00:14:45.761 } 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_set_max_subsystems", 00:14:45.761 "params": { 00:14:45.761 "max_subsystems": 1024 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_set_crdt", 00:14:45.761 "params": { 00:14:45.761 "crdt1": 0, 00:14:45.761 "crdt2": 0, 00:14:45.761 "crdt3": 0 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_create_transport", 00:14:45.761 "params": { 00:14:45.761 "trtype": "TCP", 00:14:45.761 "max_queue_depth": 128, 00:14:45.761 "max_io_qpairs_per_ctrlr": 127, 00:14:45.761 "in_capsule_data_size": 4096, 00:14:45.761 "max_io_size": 131072, 00:14:45.761 "io_unit_size": 131072, 00:14:45.761 "max_aq_depth": 128, 00:14:45.761 "num_shared_buffers": 511, 00:14:45.761 
"buf_cache_size": 4294967295, 00:14:45.761 "dif_insert_or_strip": false, 00:14:45.761 "zcopy": false, 00:14:45.761 "c2h_success": false, 00:14:45.761 "sock_priority": 0, 00:14:45.761 "abort_timeout_sec": 1, 00:14:45.761 "ack_timeout": 0 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_create_subsystem", 00:14:45.761 "params": { 00:14:45.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.761 "allow_any_host": false, 00:14:45.761 "serial_number": "SPDK00000000000001", 00:14:45.761 "model_number": "SPDK bdev Controller", 00:14:45.761 "max_namespaces": 10, 00:14:45.761 "min_cntlid": 1, 00:14:45.761 "max_cntlid": 65519, 00:14:45.761 "ana_reporting": false 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_subsystem_add_host", 00:14:45.761 "params": { 00:14:45.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.761 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.761 "psk": "/tmp/tmp.1Zj0dXTB8P" 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_subsystem_add_ns", 00:14:45.761 "params": { 00:14:45.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.761 "namespace": { 00:14:45.761 "nsid": 1, 00:14:45.761 "bdev_name": "malloc0", 00:14:45.761 "nguid": "9A08A002CA844318835569F9F676AEEC", 00:14:45.761 "uuid": "9a08a002-ca84-4318-8355-69f9f676aeec", 00:14:45.761 "no_auto_visible": false 00:14:45.761 } 00:14:45.761 } 00:14:45.761 }, 00:14:45.761 { 00:14:45.761 "method": "nvmf_subsystem_add_listener", 00:14:45.761 "params": { 00:14:45.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.761 "listen_address": { 00:14:45.761 "trtype": "TCP", 00:14:45.761 "adrfam": "IPv4", 00:14:45.761 "traddr": "10.0.0.2", 00:14:45.761 "trsvcid": "4420" 00:14:45.761 }, 00:14:45.761 "secure_channel": true 00:14:45.761 } 00:14:45.761 } 00:14:45.761 ] 00:14:45.761 } 00:14:45.761 ] 00:14:45.761 }' 00:14:45.761 13:43:48 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
save_config 00:14:46.020 13:43:48 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:46.020 "subsystems": [ 00:14:46.020 { 00:14:46.020 "subsystem": "keyring", 00:14:46.020 "config": [] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "iobuf", 00:14:46.020 "config": [ 00:14:46.020 { 00:14:46.020 "method": "iobuf_set_options", 00:14:46.020 "params": { 00:14:46.020 "small_pool_count": 8192, 00:14:46.020 "large_pool_count": 1024, 00:14:46.020 "small_bufsize": 8192, 00:14:46.020 "large_bufsize": 135168 00:14:46.020 } 00:14:46.020 } 00:14:46.020 ] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "sock", 00:14:46.020 "config": [ 00:14:46.020 { 00:14:46.020 "method": "sock_impl_set_options", 00:14:46.020 "params": { 00:14:46.020 "impl_name": "posix", 00:14:46.020 "recv_buf_size": 2097152, 00:14:46.020 "send_buf_size": 2097152, 00:14:46.020 "enable_recv_pipe": true, 00:14:46.020 "enable_quickack": false, 00:14:46.020 "enable_placement_id": 0, 00:14:46.020 "enable_zerocopy_send_server": true, 00:14:46.020 "enable_zerocopy_send_client": false, 00:14:46.020 "zerocopy_threshold": 0, 00:14:46.020 "tls_version": 0, 00:14:46.020 "enable_ktls": false 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "sock_impl_set_options", 00:14:46.020 "params": { 00:14:46.020 "impl_name": "ssl", 00:14:46.020 "recv_buf_size": 4096, 00:14:46.020 "send_buf_size": 4096, 00:14:46.020 "enable_recv_pipe": true, 00:14:46.020 "enable_quickack": false, 00:14:46.020 "enable_placement_id": 0, 00:14:46.020 "enable_zerocopy_send_server": true, 00:14:46.020 "enable_zerocopy_send_client": false, 00:14:46.020 "zerocopy_threshold": 0, 00:14:46.020 "tls_version": 0, 00:14:46.020 "enable_ktls": false 00:14:46.020 } 00:14:46.020 } 00:14:46.020 ] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "vmd", 00:14:46.020 "config": [] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "accel", 00:14:46.020 "config": [ 00:14:46.020 { 00:14:46.020 "method": "accel_set_options", 
00:14:46.020 "params": { 00:14:46.020 "small_cache_size": 128, 00:14:46.020 "large_cache_size": 16, 00:14:46.020 "task_count": 2048, 00:14:46.020 "sequence_count": 2048, 00:14:46.020 "buf_count": 2048 00:14:46.020 } 00:14:46.020 } 00:14:46.020 ] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "bdev", 00:14:46.020 "config": [ 00:14:46.020 { 00:14:46.020 "method": "bdev_set_options", 00:14:46.020 "params": { 00:14:46.020 "bdev_io_pool_size": 65535, 00:14:46.020 "bdev_io_cache_size": 256, 00:14:46.020 "bdev_auto_examine": true, 00:14:46.020 "iobuf_small_cache_size": 128, 00:14:46.020 "iobuf_large_cache_size": 16 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_raid_set_options", 00:14:46.020 "params": { 00:14:46.020 "process_window_size_kb": 1024 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_iscsi_set_options", 00:14:46.020 "params": { 00:14:46.020 "timeout_sec": 30 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_nvme_set_options", 00:14:46.020 "params": { 00:14:46.020 "action_on_timeout": "none", 00:14:46.020 "timeout_us": 0, 00:14:46.020 "timeout_admin_us": 0, 00:14:46.020 "keep_alive_timeout_ms": 10000, 00:14:46.020 "arbitration_burst": 0, 00:14:46.020 "low_priority_weight": 0, 00:14:46.020 "medium_priority_weight": 0, 00:14:46.020 "high_priority_weight": 0, 00:14:46.020 "nvme_adminq_poll_period_us": 10000, 00:14:46.020 "nvme_ioq_poll_period_us": 0, 00:14:46.020 "io_queue_requests": 512, 00:14:46.020 "delay_cmd_submit": true, 00:14:46.020 "transport_retry_count": 4, 00:14:46.020 "bdev_retry_count": 3, 00:14:46.020 "transport_ack_timeout": 0, 00:14:46.020 "ctrlr_loss_timeout_sec": 0, 00:14:46.020 "reconnect_delay_sec": 0, 00:14:46.020 "fast_io_fail_timeout_sec": 0, 00:14:46.020 "disable_auto_failback": false, 00:14:46.020 "generate_uuids": false, 00:14:46.020 "transport_tos": 0, 00:14:46.020 "nvme_error_stat": false, 00:14:46.020 "rdma_srq_size": 0, 00:14:46.020 
"io_path_stat": false, 00:14:46.020 "allow_accel_sequence": false, 00:14:46.020 "rdma_max_cq_size": 0, 00:14:46.020 "rdma_cm_event_timeout_ms": 0, 00:14:46.020 "dhchap_digests": [ 00:14:46.020 "sha256", 00:14:46.020 "sha384", 00:14:46.020 "sha512" 00:14:46.020 ], 00:14:46.020 "dhchap_dhgroups": [ 00:14:46.020 "null", 00:14:46.020 "ffdhe2048", 00:14:46.020 "ffdhe3072", 00:14:46.020 "ffdhe4096", 00:14:46.020 "ffdhe6144", 00:14:46.020 "ffdhe8192" 00:14:46.020 ] 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_nvme_attach_controller", 00:14:46.020 "params": { 00:14:46.020 "name": "TLSTEST", 00:14:46.020 "trtype": "TCP", 00:14:46.020 "adrfam": "IPv4", 00:14:46.020 "traddr": "10.0.0.2", 00:14:46.020 "trsvcid": "4420", 00:14:46.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.020 "prchk_reftag": false, 00:14:46.020 "prchk_guard": false, 00:14:46.020 "ctrlr_loss_timeout_sec": 0, 00:14:46.020 "reconnect_delay_sec": 0, 00:14:46.020 "fast_io_fail_timeout_sec": 0, 00:14:46.020 "psk": "/tmp/tmp.1Zj0dXTB8P", 00:14:46.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.020 "hdgst": false, 00:14:46.020 "ddgst": false 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_nvme_set_hotplug", 00:14:46.020 "params": { 00:14:46.020 "period_us": 100000, 00:14:46.020 "enable": false 00:14:46.020 } 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "method": "bdev_wait_for_examine" 00:14:46.020 } 00:14:46.020 ] 00:14:46.020 }, 00:14:46.020 { 00:14:46.020 "subsystem": "nbd", 00:14:46.020 "config": [] 00:14:46.020 } 00:14:46.020 ] 00:14:46.020 }' 00:14:46.020 13:43:48 -- target/tls.sh@199 -- # killprocess 2607609 00:14:46.020 13:43:48 -- common/autotest_common.sh@936 -- # '[' -z 2607609 ']' 00:14:46.020 13:43:48 -- common/autotest_common.sh@940 -- # kill -0 2607609 00:14:46.020 13:43:48 -- common/autotest_common.sh@941 -- # uname 00:14:46.020 13:43:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.020 13:43:48 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2607609 00:14:46.020 13:43:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:46.020 13:43:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:46.020 13:43:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2607609' 00:14:46.020 killing process with pid 2607609 00:14:46.020 13:43:48 -- common/autotest_common.sh@955 -- # kill 2607609 00:14:46.020 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.020 00:14:46.020 Latency(us) 00:14:46.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.020 =================================================================================================================== 00:14:46.020 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.020 [2024-04-18 13:43:48.616781] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:46.020 13:43:48 -- common/autotest_common.sh@960 -- # wait 2607609 00:14:46.279 13:43:48 -- target/tls.sh@200 -- # killprocess 2607315 00:14:46.279 13:43:48 -- common/autotest_common.sh@936 -- # '[' -z 2607315 ']' 00:14:46.279 13:43:48 -- common/autotest_common.sh@940 -- # kill -0 2607315 00:14:46.279 13:43:48 -- common/autotest_common.sh@941 -- # uname 00:14:46.279 13:43:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.279 13:43:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2607315 00:14:46.279 13:43:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:46.279 13:43:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:46.279 13:43:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2607315' 00:14:46.279 killing process with pid 2607315 00:14:46.279 13:43:48 -- common/autotest_common.sh@955 -- # kill 2607315 00:14:46.279 [2024-04-18 
13:43:48.912351] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:46.279 13:43:48 -- common/autotest_common.sh@960 -- # wait 2607315 00:14:46.538 13:43:49 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:46.538 13:43:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:46.538 13:43:49 -- target/tls.sh@203 -- # echo '{ 00:14:46.538 "subsystems": [ 00:14:46.538 { 00:14:46.538 "subsystem": "keyring", 00:14:46.538 "config": [] 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "subsystem": "iobuf", 00:14:46.538 "config": [ 00:14:46.538 { 00:14:46.538 "method": "iobuf_set_options", 00:14:46.538 "params": { 00:14:46.538 "small_pool_count": 8192, 00:14:46.538 "large_pool_count": 1024, 00:14:46.538 "small_bufsize": 8192, 00:14:46.538 "large_bufsize": 135168 00:14:46.538 } 00:14:46.538 } 00:14:46.538 ] 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "subsystem": "sock", 00:14:46.538 "config": [ 00:14:46.538 { 00:14:46.538 "method": "sock_impl_set_options", 00:14:46.538 "params": { 00:14:46.538 "impl_name": "posix", 00:14:46.538 "recv_buf_size": 2097152, 00:14:46.538 "send_buf_size": 2097152, 00:14:46.538 "enable_recv_pipe": true, 00:14:46.538 "enable_quickack": false, 00:14:46.538 "enable_placement_id": 0, 00:14:46.538 "enable_zerocopy_send_server": true, 00:14:46.538 "enable_zerocopy_send_client": false, 00:14:46.538 "zerocopy_threshold": 0, 00:14:46.538 "tls_version": 0, 00:14:46.538 "enable_ktls": false 00:14:46.538 } 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "method": "sock_impl_set_options", 00:14:46.538 "params": { 00:14:46.538 "impl_name": "ssl", 00:14:46.538 "recv_buf_size": 4096, 00:14:46.538 "send_buf_size": 4096, 00:14:46.538 "enable_recv_pipe": true, 00:14:46.538 "enable_quickack": false, 00:14:46.538 "enable_placement_id": 0, 00:14:46.538 "enable_zerocopy_send_server": true, 00:14:46.538 "enable_zerocopy_send_client": false, 00:14:46.538 
"zerocopy_threshold": 0, 00:14:46.538 "tls_version": 0, 00:14:46.538 "enable_ktls": false 00:14:46.538 } 00:14:46.538 } 00:14:46.538 ] 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "subsystem": "vmd", 00:14:46.538 "config": [] 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "subsystem": "accel", 00:14:46.538 "config": [ 00:14:46.538 { 00:14:46.538 "method": "accel_set_options", 00:14:46.538 "params": { 00:14:46.538 "small_cache_size": 128, 00:14:46.538 "large_cache_size": 16, 00:14:46.538 "task_count": 2048, 00:14:46.538 "sequence_count": 2048, 00:14:46.538 "buf_count": 2048 00:14:46.538 } 00:14:46.538 } 00:14:46.538 ] 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "subsystem": "bdev", 00:14:46.538 "config": [ 00:14:46.538 { 00:14:46.538 "method": "bdev_set_options", 00:14:46.538 "params": { 00:14:46.538 "bdev_io_pool_size": 65535, 00:14:46.538 "bdev_io_cache_size": 256, 00:14:46.538 "bdev_auto_examine": true, 00:14:46.538 "iobuf_small_cache_size": 128, 00:14:46.538 "iobuf_large_cache_size": 16 00:14:46.538 } 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "method": "bdev_raid_set_options", 00:14:46.538 "params": { 00:14:46.538 "process_window_size_kb": 1024 00:14:46.538 } 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "method": "bdev_iscsi_set_options", 00:14:46.538 "params": { 00:14:46.538 "timeout_sec": 30 00:14:46.538 } 00:14:46.538 }, 00:14:46.538 { 00:14:46.538 "method": "bdev_nvme_set_options", 00:14:46.538 "params": { 00:14:46.538 "action_on_timeout": "none", 00:14:46.538 "timeout_us": 0, 00:14:46.538 "timeout_admin_us": 0, 00:14:46.538 "keep_alive_timeout_ms": 10000, 00:14:46.538 "arbitration_burst": 0, 00:14:46.538 "low_priority_weight": 0, 00:14:46.538 "medium_priority_weight": 0, 00:14:46.538 "high_priority_weight": 0, 00:14:46.538 "nvme_adminq_poll_period_us": 10000, 00:14:46.538 "nvme_ioq_poll_period_us": 0, 00:14:46.538 "io_queue_requests": 0, 00:14:46.538 "delay_cmd_submit": true, 00:14:46.538 "transport_retry_count": 4, 00:14:46.538 "bdev_retry_count": 3, 
00:14:46.538 "transport_ack_timeout": 0, 00:14:46.538 "ctrlr_loss_timeout_sec": 0, 00:14:46.538 "reconnect_delay_sec": 0, 00:14:46.538 "fast_io_fail_timeout_sec": 0, 00:14:46.538 "disable_auto_failback": false, 00:14:46.538 "generate_uuids": false, 00:14:46.538 "transport_tos": 0, 00:14:46.538 "nvme_error_stat": false, 00:14:46.538 "rdma_srq_size": 0, 00:14:46.538 "io_path_stat": false, 00:14:46.538 "allow_accel_sequence": false, 00:14:46.538 "rdma_max_cq_size": 0, 00:14:46.538 "rdma_cm_event_timeout_ms": 0, 00:14:46.539 "dhchap_digests": [ 00:14:46.539 "sha256", 00:14:46.539 "sha384", 00:14:46.539 "sha512" 00:14:46.539 ], 00:14:46.539 "dhchap_dhgroups": [ 00:14:46.539 "null", 00:14:46.539 "ffdhe2048", 00:14:46.539 "ffdhe3072", 00:14:46.539 "ffdhe4096", 00:14:46.539 "ffdhe6144", 00:14:46.539 "ffdhe8192" 00:14:46.539 ] 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "bdev_nvme_set_hotplug", 00:14:46.539 "params": { 00:14:46.539 "period_us": 100000, 00:14:46.539 "enable": false 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "bdev_malloc_create", 00:14:46.539 "params": { 00:14:46.539 "name": "malloc0", 00:14:46.539 "num_blocks": 8192, 00:14:46.539 "block_size": 4096, 00:14:46.539 "physical_block_size": 4096, 00:14:46.539 "uuid": "9a08a002-ca84-4318-8355-69f9f676aeec", 00:14:46.539 "optimal_io_boundary": 0 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "bdev_wait_for_examine" 00:14:46.539 } 00:14:46.539 ] 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "subsystem": "nbd", 00:14:46.539 "config": [] 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "subsystem": "scheduler", 00:14:46.539 "config": [ 00:14:46.539 { 00:14:46.539 "method": "framework_set_scheduler", 00:14:46.539 "params": { 00:14:46.539 "name": "static" 00:14:46.539 } 00:14:46.539 } 00:14:46.539 ] 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "subsystem": "nvmf", 00:14:46.539 "config": [ 00:14:46.539 { 00:14:46.539 "method": "nvmf_set_config", 00:14:46.539 
"params": { 00:14:46.539 "discovery_filter": "match_any", 00:14:46.539 "admin_cmd_passthru": { 00:14:46.539 "identify_ctrlr": false 00:14:46.539 } 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_set_max_subsystems", 00:14:46.539 "params": { 00:14:46.539 "max_subsystems": 1024 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_set_crdt", 00:14:46.539 "params": { 00:14:46.539 "crdt1": 0, 00:14:46.539 "crdt2": 0, 00:14:46.539 "crdt3": 0 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_create_transport", 00:14:46.539 "params": { 00:14:46.539 "trtype": "TCP", 00:14:46.539 "max_queue_depth": 128, 00:14:46.539 "max_io_qpairs_per_ctrlr": 127, 00:14:46.539 "in_capsule_data_size": 4096, 00:14:46.539 "max_io_size": 131072, 00:14:46.539 "io_unit_size": 131072, 00:14:46.539 "max_aq_depth": 128, 00:14:46.539 "num_shared_buffers": 511, 00:14:46.539 "buf_cache_size": 4294967295, 00:14:46.539 "dif_insert_or_strip": false, 00:14:46.539 "zcopy": false, 00:14:46.539 "c2h_success": false, 00:14:46.539 "sock_priority": 0, 00:14:46.539 "abort_timeout_sec": 1, 00:14:46.539 "ack_timeout": 0 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_create_subsystem", 00:14:46.539 "params": { 00:14:46.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.539 "allow_any_host": false, 00:14:46.539 "serial_number": "SPDK00000000000001", 00:14:46.539 "model_number": "SPDK bdev Controller", 00:14:46.539 "max_namespaces": 10, 00:14:46.539 "min_cntlid": 1, 00:14:46.539 "max_cntlid": 65519, 00:14:46.539 "ana_reporting": false 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_subsystem_add_host", 00:14:46.539 "params": { 00:14:46.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.539 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.539 "psk": "/tmp/tmp.1Zj0dXTB8P" 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_subsystem_add_ns", 00:14:46.539 "params": { 
00:14:46.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.539 "namespace": { 00:14:46.539 "nsid": 1, 00:14:46.539 "bdev_name": "malloc0", 00:14:46.539 "nguid": "9A08A002CA844318835569F9F676AEEC", 00:14:46.539 "uuid": "9a08a002-ca84-4318-8355-69f9f676aeec", 00:14:46.539 "no_auto_visible": false 00:14:46.539 } 00:14:46.539 } 00:14:46.539 }, 00:14:46.539 { 00:14:46.539 "method": "nvmf_subsystem_add_listener", 00:14:46.539 "params": { 00:14:46.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.539 "listen_address": { 00:14:46.539 "trtype": "TCP", 00:14:46.539 "adrfam": "IPv4", 00:14:46.539 "traddr": "10.0.0.2", 00:14:46.539 "trsvcid": "4420" 00:14:46.539 }, 00:14:46.539 "secure_channel": true 00:14:46.539 } 00:14:46.539 } 00:14:46.539 ] 00:14:46.539 } 00:14:46.539 ] 00:14:46.539 }' 00:14:46.539 13:43:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:46.539 13:43:49 -- common/autotest_common.sh@10 -- # set +x 00:14:46.539 13:43:49 -- nvmf/common.sh@470 -- # nvmfpid=2607885 00:14:46.539 13:43:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:46.539 13:43:49 -- nvmf/common.sh@471 -- # waitforlisten 2607885 00:14:46.539 13:43:49 -- common/autotest_common.sh@817 -- # '[' -z 2607885 ']' 00:14:46.539 13:43:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.539 13:43:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.539 13:43:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:46.539 13:43:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.539 13:43:49 -- common/autotest_common.sh@10 -- # set +x 00:14:46.539 [2024-04-18 13:43:49.261210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:46.539 [2024-04-18 13:43:49.261299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.539 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.539 [2024-04-18 13:43:49.329493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.797 [2024-04-18 13:43:49.440616] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.797 [2024-04-18 13:43:49.440680] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.797 [2024-04-18 13:43:49.440708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.797 [2024-04-18 13:43:49.440719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.797 [2024-04-18 13:43:49.440729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:46.797 [2024-04-18 13:43:49.440840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.055 [2024-04-18 13:43:49.672854] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.056 [2024-04-18 13:43:49.688810] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.056 [2024-04-18 13:43:49.704874] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.056 [2024-04-18 13:43:49.721371] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.622 13:43:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.622 13:43:50 -- common/autotest_common.sh@850 -- # return 0 00:14:47.622 13:43:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:47.622 13:43:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:47.622 13:43:50 -- common/autotest_common.sh@10 -- # set +x 00:14:47.622 13:43:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.622 13:43:50 -- target/tls.sh@207 -- # bdevperf_pid=2608036 00:14:47.622 13:43:50 -- target/tls.sh@208 -- # waitforlisten 2608036 /var/tmp/bdevperf.sock 00:14:47.622 13:43:50 -- common/autotest_common.sh@817 -- # '[' -z 2608036 ']' 00:14:47.622 13:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.622 13:43:50 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:47.622 13:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:47.622 13:43:50 -- target/tls.sh@204 -- # echo '{ 00:14:47.622 "subsystems": [ 00:14:47.622 { 00:14:47.622 "subsystem": "keyring", 00:14:47.622 "config": [] 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "subsystem": "iobuf", 00:14:47.622 "config": [ 
00:14:47.622 { 00:14:47.622 "method": "iobuf_set_options", 00:14:47.622 "params": { 00:14:47.622 "small_pool_count": 8192, 00:14:47.622 "large_pool_count": 1024, 00:14:47.622 "small_bufsize": 8192, 00:14:47.622 "large_bufsize": 135168 00:14:47.622 } 00:14:47.622 } 00:14:47.622 ] 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "subsystem": "sock", 00:14:47.622 "config": [ 00:14:47.622 { 00:14:47.622 "method": "sock_impl_set_options", 00:14:47.622 "params": { 00:14:47.622 "impl_name": "posix", 00:14:47.622 "recv_buf_size": 2097152, 00:14:47.622 "send_buf_size": 2097152, 00:14:47.622 "enable_recv_pipe": true, 00:14:47.622 "enable_quickack": false, 00:14:47.622 "enable_placement_id": 0, 00:14:47.622 "enable_zerocopy_send_server": true, 00:14:47.622 "enable_zerocopy_send_client": false, 00:14:47.622 "zerocopy_threshold": 0, 00:14:47.622 "tls_version": 0, 00:14:47.622 "enable_ktls": false 00:14:47.622 } 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "method": "sock_impl_set_options", 00:14:47.622 "params": { 00:14:47.622 "impl_name": "ssl", 00:14:47.622 "recv_buf_size": 4096, 00:14:47.622 "send_buf_size": 4096, 00:14:47.622 "enable_recv_pipe": true, 00:14:47.622 "enable_quickack": false, 00:14:47.622 "enable_placement_id": 0, 00:14:47.622 "enable_zerocopy_send_server": true, 00:14:47.622 "enable_zerocopy_send_client": false, 00:14:47.622 "zerocopy_threshold": 0, 00:14:47.622 "tls_version": 0, 00:14:47.622 "enable_ktls": false 00:14:47.622 } 00:14:47.622 } 00:14:47.622 ] 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "subsystem": "vmd", 00:14:47.622 "config": [] 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "subsystem": "accel", 00:14:47.622 "config": [ 00:14:47.622 { 00:14:47.622 "method": "accel_set_options", 00:14:47.622 "params": { 00:14:47.622 "small_cache_size": 128, 00:14:47.622 "large_cache_size": 16, 00:14:47.622 "task_count": 2048, 00:14:47.622 "sequence_count": 2048, 00:14:47.622 "buf_count": 2048 00:14:47.622 } 00:14:47.622 } 00:14:47.622 ] 00:14:47.622 }, 00:14:47.622 { 
00:14:47.622 "subsystem": "bdev", 00:14:47.622 "config": [ 00:14:47.622 { 00:14:47.622 "method": "bdev_set_options", 00:14:47.622 "params": { 00:14:47.622 "bdev_io_pool_size": 65535, 00:14:47.622 "bdev_io_cache_size": 256, 00:14:47.622 "bdev_auto_examine": true, 00:14:47.622 "iobuf_small_cache_size": 128, 00:14:47.622 "iobuf_large_cache_size": 16 00:14:47.622 } 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "method": "bdev_raid_set_options", 00:14:47.622 "params": { 00:14:47.622 "process_window_size_kb": 1024 00:14:47.622 } 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "method": "bdev_iscsi_set_options", 00:14:47.622 "params": { 00:14:47.622 "timeout_sec": 30 00:14:47.622 } 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "method": "bdev_nvme_set_options", 00:14:47.622 "params": { 00:14:47.622 "action_on_timeout": "none", 00:14:47.622 "timeout_us": 0, 00:14:47.622 "timeout_admin_us": 0, 00:14:47.622 "keep_alive_timeout_ms": 10000, 00:14:47.622 "arbitration_burst": 0, 00:14:47.622 "low_priority_weight": 0, 00:14:47.622 "medium_priority_weight": 0, 00:14:47.622 "high_priority_weight": 0, 00:14:47.622 "nvme_adminq_poll_period_us": 10000, 00:14:47.622 "nvme_ioq_poll_period_us": 0, 00:14:47.622 "io_queue_requests": 512, 00:14:47.622 "delay_cmd_submit": true, 00:14:47.622 "transport_retry_count": 4, 00:14:47.622 "bdev_retry_count": 3, 00:14:47.622 "transport_ack_timeout": 0, 00:14:47.622 "ctrlr_loss_timeout_sec": 0, 00:14:47.622 "reconnect_delay_sec": 0, 00:14:47.622 "fast_io_fail_timeout_sec": 0, 00:14:47.622 "disable_auto_failback": false, 00:14:47.622 "generate_uuids": false, 00:14:47.622 "transport_tos": 0, 00:14:47.622 "nvme_error_stat": false, 00:14:47.622 "rdma_srq_size": 0, 00:14:47.622 "io_path_stat": false, 00:14:47.622 "allow_accel_sequence": false, 00:14:47.622 "rdma_max_cq_size": 0, 00:14:47.622 "rdma_cm_event_timeout_ms": 0, 00:14:47.622 "dhchap_digests": [ 00:14:47.622 "sha256", 00:14:47.622 "sha384", 00:14:47.622 "sha512" 00:14:47.622 ], 00:14:47.622 
"dhchap_dhgroups": [ 00:14:47.622 "null", 00:14:47.622 "ffdhe2048", 00:14:47.622 "ffdhe3072", 00:14:47.622 "ffdhe4096", 00:14:47.622 "ffdhe6144", 00:14:47.622 "ffdhe8192" 00:14:47.622 ] 00:14:47.622 } 00:14:47.622 }, 00:14:47.622 { 00:14:47.622 "method": "bdev_nvme_attach_controller", 00:14:47.622 "params": { 00:14:47.622 "name": "TLSTEST", 00:14:47.622 "trtype": "TCP", 00:14:47.622 "adrfam": "IPv4", 00:14:47.622 "traddr": "10.0.0.2", 00:14:47.622 "trsvcid": "4420", 00:14:47.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.622 "prchk_reftag": false, 00:14:47.623 "prchk_guard": false, 00:14:47.623 "ctrlr_loss_timeout_sec": 0, 00:14:47.623 "reconnect_delay_sec": 0, 00:14:47.623 "fast_io_fail_timeout_sec": 0, 00:14:47.623 "psk": "/tmp/tmp.1Zj0dXTB8P", 00:14:47.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.623 "hdgst": false, 00:14:47.623 "ddgst": false 00:14:47.623 } 00:14:47.623 }, 00:14:47.623 { 00:14:47.623 "method": "bdev_nvme_set_hotplug", 00:14:47.623 "params": { 00:14:47.623 "period_us": 100000, 00:14:47.623 "enable": false 00:14:47.623 } 00:14:47.623 }, 00:14:47.623 { 00:14:47.623 "method": "bdev_wait_for_examine" 00:14:47.623 } 00:14:47.623 ] 00:14:47.623 }, 00:14:47.623 { 00:14:47.623 "subsystem": "nbd", 00:14:47.623 "config": [] 00:14:47.623 } 00:14:47.623 ] 00:14:47.623 }' 00:14:47.623 13:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.623 13:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:47.623 13:43:50 -- common/autotest_common.sh@10 -- # set +x 00:14:47.623 [2024-04-18 13:43:50.255085] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:14:47.623 [2024-04-18 13:43:50.255185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608036 ] 00:14:47.623 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.623 [2024-04-18 13:43:50.315127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.623 [2024-04-18 13:43:50.425740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.881 [2024-04-18 13:43:50.595924] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.881 [2024-04-18 13:43:50.596075] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:48.446 13:43:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.446 13:43:51 -- common/autotest_common.sh@850 -- # return 0 00:14:48.446 13:43:51 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:48.703 Running I/O for 10 seconds... 
00:14:58.663 00:14:58.663 Latency(us) 00:14:58.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:58.663 Verification LBA range: start 0x0 length 0x2000 00:14:58.663 TLSTESTn1 : 10.03 3631.53 14.19 0.00 0.00 35181.36 9223.59 66021.45 00:14:58.663 =================================================================================================================== 00:14:58.663 Total : 3631.53 14.19 0.00 0.00 35181.36 9223.59 66021.45 00:14:58.663 0 00:14:58.663 13:44:01 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.663 13:44:01 -- target/tls.sh@214 -- # killprocess 2608036 00:14:58.663 13:44:01 -- common/autotest_common.sh@936 -- # '[' -z 2608036 ']' 00:14:58.663 13:44:01 -- common/autotest_common.sh@940 -- # kill -0 2608036 00:14:58.663 13:44:01 -- common/autotest_common.sh@941 -- # uname 00:14:58.663 13:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.663 13:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608036 00:14:58.663 13:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:58.663 13:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:58.663 13:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2608036' 00:14:58.663 killing process with pid 2608036 00:14:58.663 13:44:01 -- common/autotest_common.sh@955 -- # kill 2608036 00:14:58.663 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.663 00:14:58.663 Latency(us) 00:14:58.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.663 =================================================================================================================== 00:14:58.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:58.663 [2024-04-18 13:44:01.366377] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:58.663 13:44:01 -- common/autotest_common.sh@960 -- # wait 2608036 00:14:58.920 13:44:01 -- target/tls.sh@215 -- # killprocess 2607885 00:14:58.920 13:44:01 -- common/autotest_common.sh@936 -- # '[' -z 2607885 ']' 00:14:58.920 13:44:01 -- common/autotest_common.sh@940 -- # kill -0 2607885 00:14:58.920 13:44:01 -- common/autotest_common.sh@941 -- # uname 00:14:58.920 13:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.920 13:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2607885 00:14:58.920 13:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:58.920 13:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:58.920 13:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2607885' 00:14:58.920 killing process with pid 2607885 00:14:58.920 13:44:01 -- common/autotest_common.sh@955 -- # kill 2607885 00:14:58.920 [2024-04-18 13:44:01.664285] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:58.920 13:44:01 -- common/autotest_common.sh@960 -- # wait 2607885 00:14:59.178 13:44:01 -- target/tls.sh@218 -- # nvmfappstart 00:14:59.178 13:44:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:59.178 13:44:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:59.178 13:44:01 -- common/autotest_common.sh@10 -- # set +x 00:14:59.178 13:44:01 -- nvmf/common.sh@470 -- # nvmfpid=2609403 00:14:59.178 13:44:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:59.178 13:44:01 -- nvmf/common.sh@471 -- # waitforlisten 2609403 00:14:59.178 13:44:01 -- common/autotest_common.sh@817 -- # '[' -z 2609403 ']' 00:14:59.178 13:44:01 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:59.178 13:44:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:59.178 13:44:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.178 13:44:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:59.178 13:44:01 -- common/autotest_common.sh@10 -- # set +x 00:14:59.435 [2024-04-18 13:44:02.016130] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:14:59.435 [2024-04-18 13:44:02.016243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.435 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.435 [2024-04-18 13:44:02.084034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.435 [2024-04-18 13:44:02.199029] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.435 [2024-04-18 13:44:02.199094] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.435 [2024-04-18 13:44:02.199110] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.435 [2024-04-18 13:44:02.199123] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.436 [2024-04-18 13:44:02.199134] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.436 [2024-04-18 13:44:02.199184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.367 13:44:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.367 13:44:02 -- common/autotest_common.sh@850 -- # return 0 00:15:00.367 13:44:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:00.367 13:44:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.367 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:15:00.367 13:44:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.367 13:44:02 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.1Zj0dXTB8P 00:15:00.367 13:44:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.1Zj0dXTB8P 00:15:00.367 13:44:02 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:00.625 [2024-04-18 13:44:03.184010] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.625 13:44:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:00.883 13:44:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:00.883 [2024-04-18 13:44:03.665309] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.883 [2024-04-18 13:44:03.665588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.883 13:44:03 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:01.141 malloc0 00:15:01.141 13:44:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:15:01.398 13:44:04 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Zj0dXTB8P 00:15:01.656 [2024-04-18 13:44:04.390310] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:01.656 13:44:04 -- target/tls.sh@222 -- # bdevperf_pid=2609776 00:15:01.656 13:44:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:01.656 13:44:04 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:01.656 13:44:04 -- target/tls.sh@225 -- # waitforlisten 2609776 /var/tmp/bdevperf.sock 00:15:01.656 13:44:04 -- common/autotest_common.sh@817 -- # '[' -z 2609776 ']' 00:15:01.656 13:44:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.656 13:44:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.656 13:44:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.656 13:44:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.656 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:15:01.656 [2024-04-18 13:44:04.454328] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:15:01.656 [2024-04-18 13:44:04.454398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609776 ] 00:15:01.913 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.913 [2024-04-18 13:44:04.516959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.913 [2024-04-18 13:44:04.630948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.847 13:44:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.847 13:44:05 -- common/autotest_common.sh@850 -- # return 0 00:15:02.847 13:44:05 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1Zj0dXTB8P 00:15:02.847 13:44:05 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:03.105 [2024-04-18 13:44:05.850140] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.363 nvme0n1 00:15:03.363 13:44:05 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.363 Running I/O for 1 seconds... 
00:15:04.327 00:15:04.327 Latency(us) 00:15:04.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.327 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.327 Verification LBA range: start 0x0 length 0x2000 00:15:04.327 nvme0n1 : 1.03 3023.51 11.81 0.00 0.00 41824.78 9369.22 95925.29 00:15:04.327 =================================================================================================================== 00:15:04.327 Total : 3023.51 11.81 0.00 0.00 41824.78 9369.22 95925.29 00:15:04.327 0 00:15:04.327 13:44:07 -- target/tls.sh@234 -- # killprocess 2609776 00:15:04.327 13:44:07 -- common/autotest_common.sh@936 -- # '[' -z 2609776 ']' 00:15:04.327 13:44:07 -- common/autotest_common.sh@940 -- # kill -0 2609776 00:15:04.327 13:44:07 -- common/autotest_common.sh@941 -- # uname 00:15:04.327 13:44:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.327 13:44:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2609776 00:15:04.327 13:44:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:04.327 13:44:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:04.327 13:44:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2609776' 00:15:04.327 killing process with pid 2609776 00:15:04.327 13:44:07 -- common/autotest_common.sh@955 -- # kill 2609776 00:15:04.327 Received shutdown signal, test time was about 1.000000 seconds 00:15:04.327 00:15:04.327 Latency(us) 00:15:04.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.327 =================================================================================================================== 00:15:04.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.327 13:44:07 -- common/autotest_common.sh@960 -- # wait 2609776 00:15:04.896 13:44:07 -- target/tls.sh@235 -- # killprocess 2609403 00:15:04.896 13:44:07 -- common/autotest_common.sh@936 -- # 
'[' -z 2609403 ']' 00:15:04.896 13:44:07 -- common/autotest_common.sh@940 -- # kill -0 2609403 00:15:04.896 13:44:07 -- common/autotest_common.sh@941 -- # uname 00:15:04.896 13:44:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.896 13:44:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2609403 00:15:04.896 13:44:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.896 13:44:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.896 13:44:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2609403' 00:15:04.896 killing process with pid 2609403 00:15:04.896 13:44:07 -- common/autotest_common.sh@955 -- # kill 2609403 00:15:04.896 [2024-04-18 13:44:07.424944] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:04.896 13:44:07 -- common/autotest_common.sh@960 -- # wait 2609403 00:15:05.156 13:44:07 -- target/tls.sh@238 -- # nvmfappstart 00:15:05.156 13:44:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.156 13:44:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.156 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:15:05.156 13:44:07 -- nvmf/common.sh@470 -- # nvmfpid=2610186 00:15:05.156 13:44:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:05.156 13:44:07 -- nvmf/common.sh@471 -- # waitforlisten 2610186 00:15:05.156 13:44:07 -- common/autotest_common.sh@817 -- # '[' -z 2610186 ']' 00:15:05.156 13:44:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.156 13:44:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.156 13:44:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:05.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.156 13:44:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.156 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:15:05.156 [2024-04-18 13:44:07.764920] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:15:05.156 [2024-04-18 13:44:07.765002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.156 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.156 [2024-04-18 13:44:07.833029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.156 [2024-04-18 13:44:07.944858] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.156 [2024-04-18 13:44:07.944930] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.156 [2024-04-18 13:44:07.944947] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.156 [2024-04-18 13:44:07.944961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.156 [2024-04-18 13:44:07.944973] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:05.156 [2024-04-18 13:44:07.945016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.092 13:44:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:06.092 13:44:08 -- common/autotest_common.sh@850 -- # return 0 00:15:06.092 13:44:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:06.092 13:44:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:06.092 13:44:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.092 13:44:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.092 13:44:08 -- target/tls.sh@239 -- # rpc_cmd 00:15:06.092 13:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.092 13:44:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.092 [2024-04-18 13:44:08.710271] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.092 malloc0 00:15:06.092 [2024-04-18 13:44:08.742621] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.092 [2024-04-18 13:44:08.742879] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.092 13:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.092 13:44:08 -- target/tls.sh@252 -- # bdevperf_pid=2610340 00:15:06.092 13:44:08 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:06.092 13:44:08 -- target/tls.sh@254 -- # waitforlisten 2610340 /var/tmp/bdevperf.sock 00:15:06.092 13:44:08 -- common/autotest_common.sh@817 -- # '[' -z 2610340 ']' 00:15:06.092 13:44:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.092 13:44:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.092 13:44:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:15:06.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.092 13:44:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.092 13:44:08 -- common/autotest_common.sh@10 -- # set +x 00:15:06.092 [2024-04-18 13:44:08.812309] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:15:06.092 [2024-04-18 13:44:08.812388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610340 ] 00:15:06.092 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.092 [2024-04-18 13:44:08.873841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.352 [2024-04-18 13:44:08.987891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.291 13:44:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.291 13:44:09 -- common/autotest_common.sh@850 -- # return 0 00:15:07.291 13:44:09 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1Zj0dXTB8P 00:15:07.291 13:44:09 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:07.548 [2024-04-18 13:44:10.223480] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.548 nvme0n1 00:15:07.548 13:44:10 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.805 Running I/O for 1 seconds... 
00:15:08.738 00:15:08.738 Latency(us) 00:15:08.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.738 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:08.738 Verification LBA range: start 0x0 length 0x2000 00:15:08.738 nvme0n1 : 1.02 3144.67 12.28 0.00 0.00 40216.27 7281.78 46020.84 00:15:08.738 =================================================================================================================== 00:15:08.738 Total : 3144.67 12.28 0.00 0.00 40216.27 7281.78 46020.84 00:15:08.738 0 00:15:08.738 13:44:11 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:08.738 13:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.738 13:44:11 -- common/autotest_common.sh@10 -- # set +x 00:15:08.998 13:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.998 13:44:11 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:08.998 "subsystems": [ 00:15:08.998 { 00:15:08.998 "subsystem": "keyring", 00:15:08.998 "config": [ 00:15:08.998 { 00:15:08.998 "method": "keyring_file_add_key", 00:15:08.998 "params": { 00:15:08.998 "name": "key0", 00:15:08.998 "path": "/tmp/tmp.1Zj0dXTB8P" 00:15:08.998 } 00:15:08.998 } 00:15:08.998 ] 00:15:08.998 }, 00:15:08.998 { 00:15:08.998 "subsystem": "iobuf", 00:15:08.998 "config": [ 00:15:08.998 { 00:15:08.998 "method": "iobuf_set_options", 00:15:08.998 "params": { 00:15:08.998 "small_pool_count": 8192, 00:15:08.998 "large_pool_count": 1024, 00:15:08.998 "small_bufsize": 8192, 00:15:08.998 "large_bufsize": 135168 00:15:08.998 } 00:15:08.998 } 00:15:08.998 ] 00:15:08.998 }, 00:15:08.998 { 00:15:08.998 "subsystem": "sock", 00:15:08.998 "config": [ 00:15:08.998 { 00:15:08.998 "method": "sock_impl_set_options", 00:15:08.998 "params": { 00:15:08.998 "impl_name": "posix", 00:15:08.998 "recv_buf_size": 2097152, 00:15:08.998 "send_buf_size": 2097152, 00:15:08.998 "enable_recv_pipe": true, 00:15:08.998 "enable_quickack": false, 00:15:08.998 "enable_placement_id": 0, 
00:15:08.998 "enable_zerocopy_send_server": true, 00:15:08.998 "enable_zerocopy_send_client": false, 00:15:08.998 "zerocopy_threshold": 0, 00:15:08.998 "tls_version": 0, 00:15:08.998 "enable_ktls": false 00:15:08.998 } 00:15:08.998 }, 00:15:08.998 { 00:15:08.998 "method": "sock_impl_set_options", 00:15:08.998 "params": { 00:15:08.998 "impl_name": "ssl", 00:15:08.998 "recv_buf_size": 4096, 00:15:08.998 "send_buf_size": 4096, 00:15:08.998 "enable_recv_pipe": true, 00:15:08.998 "enable_quickack": false, 00:15:08.998 "enable_placement_id": 0, 00:15:08.998 "enable_zerocopy_send_server": true, 00:15:08.998 "enable_zerocopy_send_client": false, 00:15:08.998 "zerocopy_threshold": 0, 00:15:08.998 "tls_version": 0, 00:15:08.998 "enable_ktls": false 00:15:08.998 } 00:15:08.998 } 00:15:08.998 ] 00:15:08.998 }, 00:15:08.998 { 00:15:08.999 "subsystem": "vmd", 00:15:08.999 "config": [] 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "subsystem": "accel", 00:15:08.999 "config": [ 00:15:08.999 { 00:15:08.999 "method": "accel_set_options", 00:15:08.999 "params": { 00:15:08.999 "small_cache_size": 128, 00:15:08.999 "large_cache_size": 16, 00:15:08.999 "task_count": 2048, 00:15:08.999 "sequence_count": 2048, 00:15:08.999 "buf_count": 2048 00:15:08.999 } 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "subsystem": "bdev", 00:15:08.999 "config": [ 00:15:08.999 { 00:15:08.999 "method": "bdev_set_options", 00:15:08.999 "params": { 00:15:08.999 "bdev_io_pool_size": 65535, 00:15:08.999 "bdev_io_cache_size": 256, 00:15:08.999 "bdev_auto_examine": true, 00:15:08.999 "iobuf_small_cache_size": 128, 00:15:08.999 "iobuf_large_cache_size": 16 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_raid_set_options", 00:15:08.999 "params": { 00:15:08.999 "process_window_size_kb": 1024 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_iscsi_set_options", 00:15:08.999 "params": { 00:15:08.999 "timeout_sec": 30 00:15:08.999 } 
00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_nvme_set_options", 00:15:08.999 "params": { 00:15:08.999 "action_on_timeout": "none", 00:15:08.999 "timeout_us": 0, 00:15:08.999 "timeout_admin_us": 0, 00:15:08.999 "keep_alive_timeout_ms": 10000, 00:15:08.999 "arbitration_burst": 0, 00:15:08.999 "low_priority_weight": 0, 00:15:08.999 "medium_priority_weight": 0, 00:15:08.999 "high_priority_weight": 0, 00:15:08.999 "nvme_adminq_poll_period_us": 10000, 00:15:08.999 "nvme_ioq_poll_period_us": 0, 00:15:08.999 "io_queue_requests": 0, 00:15:08.999 "delay_cmd_submit": true, 00:15:08.999 "transport_retry_count": 4, 00:15:08.999 "bdev_retry_count": 3, 00:15:08.999 "transport_ack_timeout": 0, 00:15:08.999 "ctrlr_loss_timeout_sec": 0, 00:15:08.999 "reconnect_delay_sec": 0, 00:15:08.999 "fast_io_fail_timeout_sec": 0, 00:15:08.999 "disable_auto_failback": false, 00:15:08.999 "generate_uuids": false, 00:15:08.999 "transport_tos": 0, 00:15:08.999 "nvme_error_stat": false, 00:15:08.999 "rdma_srq_size": 0, 00:15:08.999 "io_path_stat": false, 00:15:08.999 "allow_accel_sequence": false, 00:15:08.999 "rdma_max_cq_size": 0, 00:15:08.999 "rdma_cm_event_timeout_ms": 0, 00:15:08.999 "dhchap_digests": [ 00:15:08.999 "sha256", 00:15:08.999 "sha384", 00:15:08.999 "sha512" 00:15:08.999 ], 00:15:08.999 "dhchap_dhgroups": [ 00:15:08.999 "null", 00:15:08.999 "ffdhe2048", 00:15:08.999 "ffdhe3072", 00:15:08.999 "ffdhe4096", 00:15:08.999 "ffdhe6144", 00:15:08.999 "ffdhe8192" 00:15:08.999 ] 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_nvme_set_hotplug", 00:15:08.999 "params": { 00:15:08.999 "period_us": 100000, 00:15:08.999 "enable": false 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_malloc_create", 00:15:08.999 "params": { 00:15:08.999 "name": "malloc0", 00:15:08.999 "num_blocks": 8192, 00:15:08.999 "block_size": 4096, 00:15:08.999 "physical_block_size": 4096, 00:15:08.999 "uuid": "10a97898-3c27-41c4-bde5-428824f3649e", 
00:15:08.999 "optimal_io_boundary": 0 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "bdev_wait_for_examine" 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "subsystem": "nbd", 00:15:08.999 "config": [] 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "subsystem": "scheduler", 00:15:08.999 "config": [ 00:15:08.999 { 00:15:08.999 "method": "framework_set_scheduler", 00:15:08.999 "params": { 00:15:08.999 "name": "static" 00:15:08.999 } 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "subsystem": "nvmf", 00:15:08.999 "config": [ 00:15:08.999 { 00:15:08.999 "method": "nvmf_set_config", 00:15:08.999 "params": { 00:15:08.999 "discovery_filter": "match_any", 00:15:08.999 "admin_cmd_passthru": { 00:15:08.999 "identify_ctrlr": false 00:15:08.999 } 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_set_max_subsystems", 00:15:08.999 "params": { 00:15:08.999 "max_subsystems": 1024 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_set_crdt", 00:15:08.999 "params": { 00:15:08.999 "crdt1": 0, 00:15:08.999 "crdt2": 0, 00:15:08.999 "crdt3": 0 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_create_transport", 00:15:08.999 "params": { 00:15:08.999 "trtype": "TCP", 00:15:08.999 "max_queue_depth": 128, 00:15:08.999 "max_io_qpairs_per_ctrlr": 127, 00:15:08.999 "in_capsule_data_size": 4096, 00:15:08.999 "max_io_size": 131072, 00:15:08.999 "io_unit_size": 131072, 00:15:08.999 "max_aq_depth": 128, 00:15:08.999 "num_shared_buffers": 511, 00:15:08.999 "buf_cache_size": 4294967295, 00:15:08.999 "dif_insert_or_strip": false, 00:15:08.999 "zcopy": false, 00:15:08.999 "c2h_success": false, 00:15:08.999 "sock_priority": 0, 00:15:08.999 "abort_timeout_sec": 1, 00:15:08.999 "ack_timeout": 0 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_create_subsystem", 00:15:08.999 "params": { 00:15:08.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:15:08.999 "allow_any_host": false, 00:15:08.999 "serial_number": "00000000000000000000", 00:15:08.999 "model_number": "SPDK bdev Controller", 00:15:08.999 "max_namespaces": 32, 00:15:08.999 "min_cntlid": 1, 00:15:08.999 "max_cntlid": 65519, 00:15:08.999 "ana_reporting": false 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_subsystem_add_host", 00:15:08.999 "params": { 00:15:08.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.999 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.999 "psk": "key0" 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_subsystem_add_ns", 00:15:08.999 "params": { 00:15:08.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.999 "namespace": { 00:15:08.999 "nsid": 1, 00:15:08.999 "bdev_name": "malloc0", 00:15:08.999 "nguid": "10A978983C2741C4BDE5428824F3649E", 00:15:08.999 "uuid": "10a97898-3c27-41c4-bde5-428824f3649e", 00:15:08.999 "no_auto_visible": false 00:15:08.999 } 00:15:08.999 } 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "method": "nvmf_subsystem_add_listener", 00:15:08.999 "params": { 00:15:08.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.999 "listen_address": { 00:15:08.999 "trtype": "TCP", 00:15:08.999 "adrfam": "IPv4", 00:15:08.999 "traddr": "10.0.0.2", 00:15:08.999 "trsvcid": "4420" 00:15:08.999 }, 00:15:08.999 "secure_channel": true 00:15:08.999 } 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 }' 00:15:08.999 13:44:11 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:09.258 13:44:11 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:09.258 "subsystems": [ 00:15:09.258 { 00:15:09.258 "subsystem": "keyring", 00:15:09.258 "config": [ 00:15:09.258 { 00:15:09.258 "method": "keyring_file_add_key", 00:15:09.258 "params": { 00:15:09.258 "name": "key0", 00:15:09.258 "path": "/tmp/tmp.1Zj0dXTB8P" 00:15:09.258 } 00:15:09.258 } 00:15:09.258 ] 00:15:09.258 }, 00:15:09.258 { 
00:15:09.258 "subsystem": "iobuf", 00:15:09.258 "config": [ 00:15:09.258 { 00:15:09.258 "method": "iobuf_set_options", 00:15:09.258 "params": { 00:15:09.259 "small_pool_count": 8192, 00:15:09.259 "large_pool_count": 1024, 00:15:09.259 "small_bufsize": 8192, 00:15:09.259 "large_bufsize": 135168 00:15:09.259 } 00:15:09.259 } 00:15:09.259 ] 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "subsystem": "sock", 00:15:09.259 "config": [ 00:15:09.259 { 00:15:09.259 "method": "sock_impl_set_options", 00:15:09.259 "params": { 00:15:09.259 "impl_name": "posix", 00:15:09.259 "recv_buf_size": 2097152, 00:15:09.259 "send_buf_size": 2097152, 00:15:09.259 "enable_recv_pipe": true, 00:15:09.259 "enable_quickack": false, 00:15:09.259 "enable_placement_id": 0, 00:15:09.259 "enable_zerocopy_send_server": true, 00:15:09.259 "enable_zerocopy_send_client": false, 00:15:09.259 "zerocopy_threshold": 0, 00:15:09.259 "tls_version": 0, 00:15:09.259 "enable_ktls": false 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "sock_impl_set_options", 00:15:09.259 "params": { 00:15:09.259 "impl_name": "ssl", 00:15:09.259 "recv_buf_size": 4096, 00:15:09.259 "send_buf_size": 4096, 00:15:09.259 "enable_recv_pipe": true, 00:15:09.259 "enable_quickack": false, 00:15:09.259 "enable_placement_id": 0, 00:15:09.259 "enable_zerocopy_send_server": true, 00:15:09.259 "enable_zerocopy_send_client": false, 00:15:09.259 "zerocopy_threshold": 0, 00:15:09.259 "tls_version": 0, 00:15:09.259 "enable_ktls": false 00:15:09.259 } 00:15:09.259 } 00:15:09.259 ] 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "subsystem": "vmd", 00:15:09.259 "config": [] 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "subsystem": "accel", 00:15:09.259 "config": [ 00:15:09.259 { 00:15:09.259 "method": "accel_set_options", 00:15:09.259 "params": { 00:15:09.259 "small_cache_size": 128, 00:15:09.259 "large_cache_size": 16, 00:15:09.259 "task_count": 2048, 00:15:09.259 "sequence_count": 2048, 00:15:09.259 "buf_count": 2048 00:15:09.259 } 
00:15:09.259 } 00:15:09.259 ] 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "subsystem": "bdev", 00:15:09.259 "config": [ 00:15:09.259 { 00:15:09.259 "method": "bdev_set_options", 00:15:09.259 "params": { 00:15:09.259 "bdev_io_pool_size": 65535, 00:15:09.259 "bdev_io_cache_size": 256, 00:15:09.259 "bdev_auto_examine": true, 00:15:09.259 "iobuf_small_cache_size": 128, 00:15:09.259 "iobuf_large_cache_size": 16 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_raid_set_options", 00:15:09.259 "params": { 00:15:09.259 "process_window_size_kb": 1024 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_iscsi_set_options", 00:15:09.259 "params": { 00:15:09.259 "timeout_sec": 30 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_nvme_set_options", 00:15:09.259 "params": { 00:15:09.259 "action_on_timeout": "none", 00:15:09.259 "timeout_us": 0, 00:15:09.259 "timeout_admin_us": 0, 00:15:09.259 "keep_alive_timeout_ms": 10000, 00:15:09.259 "arbitration_burst": 0, 00:15:09.259 "low_priority_weight": 0, 00:15:09.259 "medium_priority_weight": 0, 00:15:09.259 "high_priority_weight": 0, 00:15:09.259 "nvme_adminq_poll_period_us": 10000, 00:15:09.259 "nvme_ioq_poll_period_us": 0, 00:15:09.259 "io_queue_requests": 512, 00:15:09.259 "delay_cmd_submit": true, 00:15:09.259 "transport_retry_count": 4, 00:15:09.259 "bdev_retry_count": 3, 00:15:09.259 "transport_ack_timeout": 0, 00:15:09.259 "ctrlr_loss_timeout_sec": 0, 00:15:09.259 "reconnect_delay_sec": 0, 00:15:09.259 "fast_io_fail_timeout_sec": 0, 00:15:09.259 "disable_auto_failback": false, 00:15:09.259 "generate_uuids": false, 00:15:09.259 "transport_tos": 0, 00:15:09.259 "nvme_error_stat": false, 00:15:09.259 "rdma_srq_size": 0, 00:15:09.259 "io_path_stat": false, 00:15:09.259 "allow_accel_sequence": false, 00:15:09.259 "rdma_max_cq_size": 0, 00:15:09.259 "rdma_cm_event_timeout_ms": 0, 00:15:09.259 "dhchap_digests": [ 00:15:09.259 "sha256", 00:15:09.259 "sha384", 
00:15:09.259 "sha512" 00:15:09.259 ], 00:15:09.259 "dhchap_dhgroups": [ 00:15:09.259 "null", 00:15:09.259 "ffdhe2048", 00:15:09.259 "ffdhe3072", 00:15:09.259 "ffdhe4096", 00:15:09.259 "ffdhe6144", 00:15:09.259 "ffdhe8192" 00:15:09.259 ] 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_nvme_attach_controller", 00:15:09.259 "params": { 00:15:09.259 "name": "nvme0", 00:15:09.259 "trtype": "TCP", 00:15:09.259 "adrfam": "IPv4", 00:15:09.259 "traddr": "10.0.0.2", 00:15:09.259 "trsvcid": "4420", 00:15:09.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.259 "prchk_reftag": false, 00:15:09.259 "prchk_guard": false, 00:15:09.259 "ctrlr_loss_timeout_sec": 0, 00:15:09.259 "reconnect_delay_sec": 0, 00:15:09.259 "fast_io_fail_timeout_sec": 0, 00:15:09.259 "psk": "key0", 00:15:09.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.259 "hdgst": false, 00:15:09.259 "ddgst": false 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_nvme_set_hotplug", 00:15:09.259 "params": { 00:15:09.259 "period_us": 100000, 00:15:09.259 "enable": false 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_enable_histogram", 00:15:09.259 "params": { 00:15:09.259 "name": "nvme0n1", 00:15:09.259 "enable": true 00:15:09.259 } 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "method": "bdev_wait_for_examine" 00:15:09.259 } 00:15:09.259 ] 00:15:09.259 }, 00:15:09.259 { 00:15:09.259 "subsystem": "nbd", 00:15:09.259 "config": [] 00:15:09.259 } 00:15:09.259 ] 00:15:09.259 }' 00:15:09.259 13:44:11 -- target/tls.sh@266 -- # killprocess 2610340 00:15:09.259 13:44:11 -- common/autotest_common.sh@936 -- # '[' -z 2610340 ']' 00:15:09.259 13:44:11 -- common/autotest_common.sh@940 -- # kill -0 2610340 00:15:09.259 13:44:11 -- common/autotest_common.sh@941 -- # uname 00:15:09.259 13:44:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.259 13:44:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2610340 
00:15:09.259 13:44:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:09.259 13:44:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:09.259 13:44:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2610340' 00:15:09.259 killing process with pid 2610340 00:15:09.259 13:44:11 -- common/autotest_common.sh@955 -- # kill 2610340 00:15:09.259 Received shutdown signal, test time was about 1.000000 seconds 00:15:09.259 00:15:09.259 Latency(us) 00:15:09.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.259 =================================================================================================================== 00:15:09.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.259 13:44:11 -- common/autotest_common.sh@960 -- # wait 2610340 00:15:09.519 13:44:12 -- target/tls.sh@267 -- # killprocess 2610186 00:15:09.519 13:44:12 -- common/autotest_common.sh@936 -- # '[' -z 2610186 ']' 00:15:09.519 13:44:12 -- common/autotest_common.sh@940 -- # kill -0 2610186 00:15:09.519 13:44:12 -- common/autotest_common.sh@941 -- # uname 00:15:09.519 13:44:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.519 13:44:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2610186 00:15:09.519 13:44:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.519 13:44:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.519 13:44:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2610186' 00:15:09.519 killing process with pid 2610186 00:15:09.519 13:44:12 -- common/autotest_common.sh@955 -- # kill 2610186 00:15:09.519 13:44:12 -- common/autotest_common.sh@960 -- # wait 2610186 00:15:09.778 13:44:12 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:09.779 13:44:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:09.779 13:44:12 -- target/tls.sh@269 -- # echo '{ 00:15:09.779 
"subsystems": [ 00:15:09.779 { 00:15:09.779 "subsystem": "keyring", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "keyring_file_add_key", 00:15:09.779 "params": { 00:15:09.779 "name": "key0", 00:15:09.779 "path": "/tmp/tmp.1Zj0dXTB8P" 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "iobuf", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "iobuf_set_options", 00:15:09.779 "params": { 00:15:09.779 "small_pool_count": 8192, 00:15:09.779 "large_pool_count": 1024, 00:15:09.779 "small_bufsize": 8192, 00:15:09.779 "large_bufsize": 135168 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "sock", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "sock_impl_set_options", 00:15:09.779 "params": { 00:15:09.779 "impl_name": "posix", 00:15:09.779 "recv_buf_size": 2097152, 00:15:09.779 "send_buf_size": 2097152, 00:15:09.779 "enable_recv_pipe": true, 00:15:09.779 "enable_quickack": false, 00:15:09.779 "enable_placement_id": 0, 00:15:09.779 "enable_zerocopy_send_server": true, 00:15:09.779 "enable_zerocopy_send_client": false, 00:15:09.779 "zerocopy_threshold": 0, 00:15:09.779 "tls_version": 0, 00:15:09.779 "enable_ktls": false 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "sock_impl_set_options", 00:15:09.779 "params": { 00:15:09.779 "impl_name": "ssl", 00:15:09.779 "recv_buf_size": 4096, 00:15:09.779 "send_buf_size": 4096, 00:15:09.779 "enable_recv_pipe": true, 00:15:09.779 "enable_quickack": false, 00:15:09.779 "enable_placement_id": 0, 00:15:09.779 "enable_zerocopy_send_server": true, 00:15:09.779 "enable_zerocopy_send_client": false, 00:15:09.779 "zerocopy_threshold": 0, 00:15:09.779 "tls_version": 0, 00:15:09.779 "enable_ktls": false 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "vmd", 00:15:09.779 "config": [] 00:15:09.779 }, 00:15:09.779 { 
00:15:09.779 "subsystem": "accel", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "accel_set_options", 00:15:09.779 "params": { 00:15:09.779 "small_cache_size": 128, 00:15:09.779 "large_cache_size": 16, 00:15:09.779 "task_count": 2048, 00:15:09.779 "sequence_count": 2048, 00:15:09.779 "buf_count": 2048 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "bdev", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "bdev_set_options", 00:15:09.779 "params": { 00:15:09.779 "bdev_io_pool_size": 65535, 00:15:09.779 "bdev_io_cache_size": 256, 00:15:09.779 "bdev_auto_examine": true, 00:15:09.779 "iobuf_small_cache_size": 128, 00:15:09.779 "iobuf_large_cache_size": 16 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_raid_set_options", 00:15:09.779 "params": { 00:15:09.779 "process_window_size_kb": 1024 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_iscsi_set_options", 00:15:09.779 "params": { 00:15:09.779 "timeout_sec": 30 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_nvme_set_options", 00:15:09.779 "params": { 00:15:09.779 "action_on_timeout": "none", 00:15:09.779 "timeout_us": 0, 00:15:09.779 "timeout_admin_us": 0, 00:15:09.779 "keep_alive_timeout_ms": 10000, 00:15:09.779 "arbitration_burst": 0, 00:15:09.779 "low_priority_weight": 0, 00:15:09.779 "medium_priority_weight": 0, 00:15:09.779 "high_priority_weight": 0, 00:15:09.779 "nvme_adminq_poll_period_us": 10000, 00:15:09.779 "nvme_ioq_poll_period_us": 0, 00:15:09.779 "io_queue_requests": 0, 00:15:09.779 "delay_cmd_submit": true, 00:15:09.779 "transport_retry_count": 4, 00:15:09.779 "bdev_retry_count": 3, 00:15:09.779 "transport_ack_timeout": 0, 00:15:09.779 "ctrlr_loss_timeout_sec": 0, 00:15:09.779 "reconnect_delay_sec": 0, 00:15:09.779 "fast_io_fail_timeout_sec": 0, 00:15:09.779 "disable_auto_failback": false, 00:15:09.779 "generate_uuids": false, 00:15:09.779 
"transport_tos": 0, 00:15:09.779 "nvme_error_stat": false, 00:15:09.779 "rdma_srq_size": 0, 00:15:09.779 "io_path_stat": false, 00:15:09.779 "allow_accel_sequence": false, 00:15:09.779 "rdma_max_cq_size": 0, 00:15:09.779 "rdma_cm_event_timeout_ms": 0, 00:15:09.779 "dhchap_digests": [ 00:15:09.779 "sha256", 00:15:09.779 "sha384", 00:15:09.779 "sha512" 00:15:09.779 ], 00:15:09.779 "dhchap_dhgroups": [ 00:15:09.779 "null", 00:15:09.779 "ffdhe2048", 00:15:09.779 "ffdhe3072", 00:15:09.779 "ffdhe4096", 00:15:09.779 "ffdhe6144", 00:15:09.779 "ffdhe8192" 00:15:09.779 ] 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_nvme_set_hotplug", 00:15:09.779 "params": { 00:15:09.779 "period_us": 100000, 00:15:09.779 "enable": false 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_malloc_create", 00:15:09.779 "params": { 00:15:09.779 "name": "malloc0", 00:15:09.779 "num_blocks": 8192, 00:15:09.779 "block_size": 4096, 00:15:09.779 "physical_block_size": 4096, 00:15:09.779 "uuid": "10a97898-3c27-41c4-bde5-428824f3649e", 00:15:09.779 "optimal_io_boundary": 0 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "bdev_wait_for_examine" 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "nbd", 00:15:09.779 "config": [] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "scheduler", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "framework_set_scheduler", 00:15:09.779 "params": { 00:15:09.779 "name": "static" 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "subsystem": "nvmf", 00:15:09.779 "config": [ 00:15:09.779 { 00:15:09.779 "method": "nvmf_set_config", 00:15:09.779 "params": { 00:15:09.779 "discovery_filter": "match_any", 00:15:09.779 "admin_cmd_passthru": { 00:15:09.779 "identify_ctrlr": false 00:15:09.779 } 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_set_max_subsystems", 00:15:09.779 
"params": { 00:15:09.779 "max_subsystems": 1024 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_set_crdt", 00:15:09.779 "params": { 00:15:09.779 "crdt1": 0, 00:15:09.779 "crdt2": 0, 00:15:09.779 "crdt3": 0 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_create_transport", 00:15:09.779 "params": { 00:15:09.779 "trtype": "TCP", 00:15:09.779 "max_queue_depth": 128, 00:15:09.779 "max_io_qpairs_per_ctrlr": 127, 00:15:09.779 "in_capsule_data_size": 4096, 00:15:09.779 "max_io_size": 131072, 00:15:09.779 "io_unit_size": 131072, 00:15:09.779 "max_aq_depth": 128, 00:15:09.779 "num_shared_buffers": 511, 00:15:09.779 "buf_cache_size": 4294967295, 00:15:09.779 "dif_insert_or_strip": false, 00:15:09.779 "zcopy": false, 00:15:09.779 "c2h_success": false, 00:15:09.779 "sock_priority": 0, 00:15:09.779 "abort_timeout_sec": 1, 00:15:09.779 "ack_timeout": 0 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_create_subsystem", 00:15:09.779 "params": { 00:15:09.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.779 "allow_any_host": false, 00:15:09.779 "serial_number": "00000000000000000000", 00:15:09.779 "model_number": "SPDK bdev Controller", 00:15:09.779 "max_namespaces": 32, 00:15:09.779 "min_cntlid": 1, 00:15:09.779 "max_cntlid": 65519, 00:15:09.779 "ana_reporting": false 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_subsystem_add_host", 00:15:09.779 "params": { 00:15:09.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.779 "host": "nqn.2016-06.io.spdk:host1", 00:15:09.779 "psk": "key0" 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_subsystem_add_ns", 00:15:09.779 "params": { 00:15:09.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.779 "namespace": { 00:15:09.779 "nsid": 1, 00:15:09.779 "bdev_name": "malloc0", 00:15:09.779 "nguid": "10A978983C2741C4BDE5428824F3649E", 00:15:09.779 "uuid": "10a97898-3c27-41c4-bde5-428824f3649e", 00:15:09.779 
"no_auto_visible": false 00:15:09.779 } 00:15:09.779 } 00:15:09.779 }, 00:15:09.779 { 00:15:09.779 "method": "nvmf_subsystem_add_listener", 00:15:09.779 "params": { 00:15:09.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.779 "listen_address": { 00:15:09.779 "trtype": "TCP", 00:15:09.779 "adrfam": "IPv4", 00:15:09.779 "traddr": "10.0.0.2", 00:15:09.779 "trsvcid": "4420" 00:15:09.779 }, 00:15:09.779 "secure_channel": true 00:15:09.779 } 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 } 00:15:09.779 ] 00:15:09.779 }' 00:15:09.779 13:44:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:09.779 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:15:09.779 13:44:12 -- nvmf/common.sh@470 -- # nvmfpid=2610759 00:15:09.779 13:44:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:09.779 13:44:12 -- nvmf/common.sh@471 -- # waitforlisten 2610759 00:15:09.779 13:44:12 -- common/autotest_common.sh@817 -- # '[' -z 2610759 ']' 00:15:09.779 13:44:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.779 13:44:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.779 13:44:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.779 13:44:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.779 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:15:09.779 [2024-04-18 13:44:12.551969] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:15:09.779 [2024-04-18 13:44:12.552051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.779 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.038 [2024-04-18 13:44:12.620425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.039 [2024-04-18 13:44:12.732298] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.039 [2024-04-18 13:44:12.732371] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.039 [2024-04-18 13:44:12.732387] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.039 [2024-04-18 13:44:12.732401] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.039 [2024-04-18 13:44:12.732413] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.039 [2024-04-18 13:44:12.732548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.297 [2024-04-18 13:44:12.974971] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.297 [2024-04-18 13:44:13.006993] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:10.297 [2024-04-18 13:44:13.017378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.861 13:44:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:10.861 13:44:13 -- common/autotest_common.sh@850 -- # return 0 00:15:10.861 13:44:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:10.861 13:44:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.861 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:15:10.861 13:44:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.861 13:44:13 -- target/tls.sh@272 -- # bdevperf_pid=2610912 00:15:10.861 13:44:13 -- target/tls.sh@273 -- # waitforlisten 2610912 /var/tmp/bdevperf.sock 00:15:10.861 13:44:13 -- common/autotest_common.sh@817 -- # '[' -z 2610912 ']' 00:15:10.861 13:44:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.861 13:44:13 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:10.861 13:44:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.861 13:44:13 -- target/tls.sh@270 -- # echo '{ 00:15:10.861 "subsystems": [ 00:15:10.861 { 00:15:10.861 "subsystem": "keyring", 00:15:10.861 "config": [ 00:15:10.861 { 00:15:10.861 "method": "keyring_file_add_key", 00:15:10.861 "params": { 00:15:10.861 "name": "key0", 00:15:10.861 "path": "/tmp/tmp.1Zj0dXTB8P" 00:15:10.861 } 00:15:10.861 } 00:15:10.861 ] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": 
"iobuf", 00:15:10.861 "config": [ 00:15:10.861 { 00:15:10.861 "method": "iobuf_set_options", 00:15:10.861 "params": { 00:15:10.861 "small_pool_count": 8192, 00:15:10.861 "large_pool_count": 1024, 00:15:10.861 "small_bufsize": 8192, 00:15:10.861 "large_bufsize": 135168 00:15:10.861 } 00:15:10.861 } 00:15:10.861 ] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": "sock", 00:15:10.861 "config": [ 00:15:10.861 { 00:15:10.861 "method": "sock_impl_set_options", 00:15:10.861 "params": { 00:15:10.861 "impl_name": "posix", 00:15:10.861 "recv_buf_size": 2097152, 00:15:10.861 "send_buf_size": 2097152, 00:15:10.861 "enable_recv_pipe": true, 00:15:10.861 "enable_quickack": false, 00:15:10.861 "enable_placement_id": 0, 00:15:10.861 "enable_zerocopy_send_server": true, 00:15:10.861 "enable_zerocopy_send_client": false, 00:15:10.861 "zerocopy_threshold": 0, 00:15:10.861 "tls_version": 0, 00:15:10.861 "enable_ktls": false 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "sock_impl_set_options", 00:15:10.861 "params": { 00:15:10.861 "impl_name": "ssl", 00:15:10.861 "recv_buf_size": 4096, 00:15:10.861 "send_buf_size": 4096, 00:15:10.861 "enable_recv_pipe": true, 00:15:10.861 "enable_quickack": false, 00:15:10.861 "enable_placement_id": 0, 00:15:10.861 "enable_zerocopy_send_server": true, 00:15:10.861 "enable_zerocopy_send_client": false, 00:15:10.861 "zerocopy_threshold": 0, 00:15:10.861 "tls_version": 0, 00:15:10.861 "enable_ktls": false 00:15:10.861 } 00:15:10.861 } 00:15:10.861 ] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": "vmd", 00:15:10.861 "config": [] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": "accel", 00:15:10.861 "config": [ 00:15:10.861 { 00:15:10.861 "method": "accel_set_options", 00:15:10.861 "params": { 00:15:10.861 "small_cache_size": 128, 00:15:10.861 "large_cache_size": 16, 00:15:10.861 "task_count": 2048, 00:15:10.861 "sequence_count": 2048, 00:15:10.861 "buf_count": 2048 00:15:10.861 } 00:15:10.861 } 00:15:10.861 
] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": "bdev", 00:15:10.861 "config": [ 00:15:10.861 { 00:15:10.861 "method": "bdev_set_options", 00:15:10.861 "params": { 00:15:10.861 "bdev_io_pool_size": 65535, 00:15:10.861 "bdev_io_cache_size": 256, 00:15:10.861 "bdev_auto_examine": true, 00:15:10.861 "iobuf_small_cache_size": 128, 00:15:10.861 "iobuf_large_cache_size": 16 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_raid_set_options", 00:15:10.861 "params": { 00:15:10.861 "process_window_size_kb": 1024 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_iscsi_set_options", 00:15:10.861 "params": { 00:15:10.861 "timeout_sec": 30 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_nvme_set_options", 00:15:10.861 "params": { 00:15:10.861 "action_on_timeout": "none", 00:15:10.861 "timeout_us": 0, 00:15:10.861 "timeout_admin_us": 0, 00:15:10.861 "keep_alive_timeout_ms": 10000, 00:15:10.861 "arbitration_burst": 0, 00:15:10.861 "low_priority_weight": 0, 00:15:10.861 "medium_priority_weight": 0, 00:15:10.861 "high_priority_weight": 0, 00:15:10.861 "nvme_adminq_poll_period_us": 10000, 00:15:10.861 "nvme_ioq_poll_period_us": 0, 00:15:10.861 "io_queue_requests": 512, 00:15:10.861 "delay_cmd_submit": true, 00:15:10.861 "transport_retry_count": 4, 00:15:10.861 "bdev_retry_count": 3, 00:15:10.861 "transport_ack_timeout": 0, 00:15:10.861 "ctrlr_loss_timeout_sec": 0, 00:15:10.861 "reconnect_delay_sec": 0, 00:15:10.861 "fast_io_fail_timeout_sec": 0, 00:15:10.861 "disable_auto_failback": false, 00:15:10.861 "generate_uuids": false, 00:15:10.861 "transport_tos": 0, 00:15:10.861 "nvme_error_stat": false, 00:15:10.861 "rdma_srq_size": 0, 00:15:10.861 "io_path_stat": false, 00:15:10.861 "allow_accel_sequence": false, 00:15:10.861 "rdma_max_cq_size": 0, 00:15:10.861 "rdma_cm_event_timeout_ms": 0, 00:15:10.861 "dhchap_digests": [ 00:15:10.861 "sha256", 00:15:10.861 "sha384", 00:15:10.861 "sha512" 
00:15:10.861 ], 00:15:10.861 "dhchap_dhgroups": [ 00:15:10.861 "null", 00:15:10.861 "ffdhe2048", 00:15:10.861 "ffdhe3072", 00:15:10.861 "ffdhe4096", 00:15:10.861 "ffdhe6144", 00:15:10.861 "ffdhe8192" 00:15:10.861 ] 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_nvme_attach_controller", 00:15:10.861 "params": { 00:15:10.861 "name": "nvme0", 00:15:10.861 "trtype": "TCP", 00:15:10.861 "adrfam": "IPv4", 00:15:10.861 "traddr": "10.0.0.2", 00:15:10.861 "trsvcid": "4420", 00:15:10.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.861 "prchk_reftag": false, 00:15:10.861 "prchk_guard": false, 00:15:10.861 "ctrlr_loss_timeout_sec": 0, 00:15:10.861 "reconnect_delay_sec": 0, 00:15:10.861 "fast_io_fail_timeout_sec": 0, 00:15:10.861 "psk": "key0", 00:15:10.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.861 "hdgst": false, 00:15:10.861 "ddgst": false 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_nvme_set_hotplug", 00:15:10.861 "params": { 00:15:10.861 "period_us": 100000, 00:15:10.861 "enable": false 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_enable_histogram", 00:15:10.861 "params": { 00:15:10.861 "name": "nvme0n1", 00:15:10.861 "enable": true 00:15:10.861 } 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "method": "bdev_wait_for_examine" 00:15:10.861 } 00:15:10.861 ] 00:15:10.861 }, 00:15:10.861 { 00:15:10.861 "subsystem": "nbd", 00:15:10.861 "config": [] 00:15:10.861 } 00:15:10.861 ] 00:15:10.861 }' 00:15:10.861 13:44:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:10.861 13:44:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.861 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:15:10.861 [2024-04-18 13:44:13.547184] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:15:10.861 [2024-04-18 13:44:13.547257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610912 ] 00:15:10.861 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.861 [2024-04-18 13:44:13.608732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.120 [2024-04-18 13:44:13.723505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.120 [2024-04-18 13:44:13.897478] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.061 13:44:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.061 13:44:14 -- common/autotest_common.sh@850 -- # return 0 00:15:12.061 13:44:14 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:12.061 13:44:14 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:12.061 13:44:14 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.061 13:44:14 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.061 Running I/O for 1 seconds... 
00:15:13.433 00:15:13.433 Latency(us) 00:15:13.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.433 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:13.433 Verification LBA range: start 0x0 length 0x2000 00:15:13.433 nvme0n1 : 1.03 2989.58 11.68 0.00 0.00 42285.03 8883.77 74953.77 00:15:13.433 =================================================================================================================== 00:15:13.433 Total : 2989.58 11.68 0.00 0.00 42285.03 8883.77 74953.77 00:15:13.433 0 00:15:13.433 13:44:15 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:13.433 13:44:15 -- target/tls.sh@279 -- # cleanup 00:15:13.433 13:44:15 -- target/tls.sh@15 -- # process_shm --id 0 00:15:13.433 13:44:15 -- common/autotest_common.sh@794 -- # type=--id 00:15:13.433 13:44:15 -- common/autotest_common.sh@795 -- # id=0 00:15:13.433 13:44:15 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:13.433 13:44:15 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.433 13:44:15 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:13.433 13:44:15 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:13.433 13:44:15 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:13.433 13:44:15 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.433 nvmf_trace.0 00:15:13.433 13:44:15 -- common/autotest_common.sh@809 -- # return 0 00:15:13.433 13:44:15 -- target/tls.sh@16 -- # killprocess 2610912 00:15:13.433 13:44:15 -- common/autotest_common.sh@936 -- # '[' -z 2610912 ']' 00:15:13.433 13:44:15 -- common/autotest_common.sh@940 -- # kill -0 2610912 00:15:13.433 13:44:15 -- common/autotest_common.sh@941 -- # uname 00:15:13.433 13:44:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.433 13:44:15 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2610912 00:15:13.433 13:44:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:13.433 13:44:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:13.433 13:44:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2610912' 00:15:13.433 killing process with pid 2610912 00:15:13.433 13:44:15 -- common/autotest_common.sh@955 -- # kill 2610912 00:15:13.433 Received shutdown signal, test time was about 1.000000 seconds 00:15:13.433 00:15:13.433 Latency(us) 00:15:13.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.433 =================================================================================================================== 00:15:13.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.433 13:44:15 -- common/autotest_common.sh@960 -- # wait 2610912 00:15:13.690 13:44:16 -- target/tls.sh@17 -- # nvmftestfini 00:15:13.690 13:44:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:13.690 13:44:16 -- nvmf/common.sh@117 -- # sync 00:15:13.690 13:44:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.690 13:44:16 -- nvmf/common.sh@120 -- # set +e 00:15:13.690 13:44:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.690 13:44:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.690 rmmod nvme_tcp 00:15:13.690 rmmod nvme_fabrics 00:15:13.690 rmmod nvme_keyring 00:15:13.690 13:44:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.690 13:44:16 -- nvmf/common.sh@124 -- # set -e 00:15:13.690 13:44:16 -- nvmf/common.sh@125 -- # return 0 00:15:13.690 13:44:16 -- nvmf/common.sh@478 -- # '[' -n 2610759 ']' 00:15:13.691 13:44:16 -- nvmf/common.sh@479 -- # killprocess 2610759 00:15:13.691 13:44:16 -- common/autotest_common.sh@936 -- # '[' -z 2610759 ']' 00:15:13.691 13:44:16 -- common/autotest_common.sh@940 -- # kill -0 2610759 00:15:13.691 13:44:16 -- common/autotest_common.sh@941 -- # uname 
00:15:13.691 13:44:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.691 13:44:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2610759 00:15:13.691 13:44:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:13.691 13:44:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:13.691 13:44:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2610759' 00:15:13.691 killing process with pid 2610759 00:15:13.691 13:44:16 -- common/autotest_common.sh@955 -- # kill 2610759 00:15:13.691 13:44:16 -- common/autotest_common.sh@960 -- # wait 2610759 00:15:13.948 13:44:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:13.948 13:44:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:13.948 13:44:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:13.948 13:44:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.948 13:44:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.948 13:44:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.948 13:44:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.948 13:44:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.851 13:44:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.851 13:44:18 -- target/tls.sh@18 -- # rm -f /tmp/tmp.WurJC7ooUZ /tmp/tmp.TsE0JxKQq0 /tmp/tmp.1Zj0dXTB8P 00:15:16.112 00:15:16.112 real 1m23.301s 00:15:16.112 user 2m11.404s 00:15:16.112 sys 0m29.147s 00:15:16.112 13:44:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.112 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:15:16.112 ************************************ 00:15:16.112 END TEST nvmf_tls 00:15:16.112 ************************************ 00:15:16.112 13:44:18 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:16.112 13:44:18 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.112 13:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.112 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:15:16.112 ************************************ 00:15:16.112 START TEST nvmf_fips 00:15:16.112 ************************************ 00:15:16.113 13:44:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:16.113 * Looking for test storage... 00:15:16.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:16.113 13:44:18 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.113 13:44:18 -- nvmf/common.sh@7 -- # uname -s 00:15:16.113 13:44:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.113 13:44:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.113 13:44:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.113 13:44:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.113 13:44:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.113 13:44:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.113 13:44:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.113 13:44:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.113 13:44:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.113 13:44:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.113 13:44:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.113 13:44:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.113 13:44:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.113 13:44:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.113 13:44:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.113 
13:44:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.113 13:44:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.113 13:44:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.113 13:44:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.113 13:44:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.113 13:44:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.113 13:44:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.113 13:44:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.113 13:44:18 -- paths/export.sh@5 -- # export PATH 00:15:16.113 13:44:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.113 13:44:18 -- nvmf/common.sh@47 -- # : 0 00:15:16.113 13:44:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.113 13:44:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.113 13:44:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.113 13:44:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.113 13:44:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.113 13:44:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.113 13:44:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.113 13:44:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.113 13:44:18 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.113 13:44:18 -- fips/fips.sh@89 -- # check_openssl_version 
00:15:16.113 13:44:18 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:16.113 13:44:18 -- fips/fips.sh@85 -- # openssl version 00:15:16.113 13:44:18 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:16.113 13:44:18 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:16.113 13:44:18 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:16.113 13:44:18 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:16.113 13:44:18 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:16.113 13:44:18 -- scripts/common.sh@333 -- # IFS=.-: 00:15:16.113 13:44:18 -- scripts/common.sh@333 -- # read -ra ver1 00:15:16.113 13:44:18 -- scripts/common.sh@334 -- # IFS=.-: 00:15:16.113 13:44:18 -- scripts/common.sh@334 -- # read -ra ver2 00:15:16.113 13:44:18 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:16.113 13:44:18 -- scripts/common.sh@337 -- # ver1_l=3 00:15:16.113 13:44:18 -- scripts/common.sh@338 -- # ver2_l=3 00:15:16.113 13:44:18 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:16.113 13:44:18 -- scripts/common.sh@341 -- # case "$op" in 00:15:16.113 13:44:18 -- scripts/common.sh@345 -- # : 1 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # decimal 3 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=3 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 3 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # decimal 3 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=3 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 3 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:16.113 13:44:18 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:16.113 13:44:18 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v++ )) 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # decimal 0 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=0 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 0 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # decimal 0 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=0 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 0 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:16.113 13:44:18 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:16.113 13:44:18 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v++ )) 00:15:16.113 13:44:18 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # decimal 9 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=9 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 9 00:15:16.113 13:44:18 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # decimal 0 00:15:16.113 13:44:18 -- scripts/common.sh@350 -- # local d=0 00:15:16.113 13:44:18 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:16.113 13:44:18 -- scripts/common.sh@352 -- # echo 0 00:15:16.113 13:44:18 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:16.113 13:44:18 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:16.113 13:44:18 -- scripts/common.sh@364 -- # return 0 00:15:16.113 13:44:18 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:16.113 13:44:18 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:16.113 13:44:18 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:16.113 13:44:18 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:16.113 13:44:18 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:16.113 13:44:18 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:16.113 13:44:18 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:16.113 13:44:18 -- fips/fips.sh@113 -- # build_openssl_config 00:15:16.113 13:44:18 -- fips/fips.sh@37 -- # cat 00:15:16.113 13:44:18 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:16.113 13:44:18 -- fips/fips.sh@58 -- # cat - 00:15:16.113 13:44:18 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:16.113 13:44:18 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:16.113 13:44:18 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:16.113 13:44:18 -- fips/fips.sh@116 -- # openssl list -providers 00:15:16.113 13:44:18 -- fips/fips.sh@116 -- # grep name 00:15:16.373 13:44:18 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:16.373 13:44:18 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:16.373 13:44:18 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:16.373 13:44:18 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:16.373 13:44:18 -- fips/fips.sh@127 -- # : 00:15:16.373 13:44:18 -- common/autotest_common.sh@638 -- # local es=0 00:15:16.373 13:44:18 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:16.373 13:44:18 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:16.373 13:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.373 13:44:18 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:16.373 13:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.373 13:44:18 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:16.373 13:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.373 13:44:18 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:16.373 13:44:18 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:16.373 13:44:18 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:16.373 Error setting digest 00:15:16.373 0052ED4A2F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:16.373 
0052ED4A2F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:16.373 13:44:18 -- common/autotest_common.sh@641 -- # es=1 00:15:16.373 13:44:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:16.373 13:44:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:16.373 13:44:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:16.373 13:44:18 -- fips/fips.sh@130 -- # nvmftestinit 00:15:16.373 13:44:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:16.373 13:44:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.373 13:44:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:16.373 13:44:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:16.373 13:44:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:16.373 13:44:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.373 13:44:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.373 13:44:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.373 13:44:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:16.373 13:44:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:16.373 13:44:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.373 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:15:18.274 13:44:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:18.274 13:44:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.274 13:44:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.274 13:44:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.274 13:44:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.274 13:44:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.274 13:44:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.274 13:44:21 -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.274 13:44:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.274 13:44:21 -- 
nvmf/common.sh@296 -- # e810=() 00:15:18.274 13:44:21 -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.274 13:44:21 -- nvmf/common.sh@297 -- # x722=() 00:15:18.274 13:44:21 -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.274 13:44:21 -- nvmf/common.sh@298 -- # mlx=() 00:15:18.274 13:44:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.274 13:44:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.274 13:44:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.274 13:44:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.274 13:44:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.274 13:44:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:18.274 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:18.274 
13:44:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.274 13:44:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:18.274 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:18.274 13:44:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.274 13:44:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.274 13:44:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.274 13:44:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:18.274 Found net devices under 0000:84:00.0: cvl_0_0 00:15:18.274 13:44:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.274 13:44:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.274 13:44:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.274 13:44:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.274 13:44:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:18.274 Found net devices under 0000:84:00.1: cvl_0_1 00:15:18.274 13:44:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.274 13:44:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:18.274 13:44:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:18.274 13:44:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:18.274 13:44:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.274 13:44:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.274 13:44:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.274 13:44:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.274 13:44:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.274 13:44:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.274 13:44:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.274 13:44:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.274 13:44:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.274 13:44:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.274 13:44:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.274 13:44:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.274 13:44:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.532 13:44:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.532 13:44:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.532 13:44:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.532 13:44:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:15:18.532 13:44:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.532 13:44:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.532 13:44:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:15:18.532 00:15:18.532 --- 10.0.0.2 ping statistics --- 00:15:18.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.532 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:18.532 13:44:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:15:18.532 00:15:18.532 --- 10.0.0.1 ping statistics --- 00:15:18.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.532 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:18.532 13:44:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.532 13:44:21 -- nvmf/common.sh@411 -- # return 0 00:15:18.532 13:44:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:18.532 13:44:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.532 13:44:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:18.532 13:44:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:18.532 13:44:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.532 13:44:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:18.532 13:44:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:18.532 13:44:21 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:18.532 13:44:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:18.532 13:44:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:18.532 13:44:21 -- common/autotest_common.sh@10 -- # set +x 
00:15:18.532 13:44:21 -- nvmf/common.sh@470 -- # nvmfpid=2613302 00:15:18.532 13:44:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:18.532 13:44:21 -- nvmf/common.sh@471 -- # waitforlisten 2613302 00:15:18.532 13:44:21 -- common/autotest_common.sh@817 -- # '[' -z 2613302 ']' 00:15:18.532 13:44:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.532 13:44:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.532 13:44:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.532 13:44:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.532 13:44:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.532 [2024-04-18 13:44:21.311758] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:15:18.532 [2024-04-18 13:44:21.311851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.791 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.792 [2024-04-18 13:44:21.386506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.792 [2024-04-18 13:44:21.506171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.792 [2024-04-18 13:44:21.506245] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:18.792 [2024-04-18 13:44:21.506261] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.792 [2024-04-18 13:44:21.506275] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.792 [2024-04-18 13:44:21.506287] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.792 [2024-04-18 13:44:21.506325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.780 13:44:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.780 13:44:22 -- common/autotest_common.sh@850 -- # return 0 00:15:19.780 13:44:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:19.780 13:44:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:19.780 13:44:22 -- common/autotest_common.sh@10 -- # set +x 00:15:19.780 13:44:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.780 13:44:22 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:19.780 13:44:22 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:19.780 13:44:22 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:19.780 13:44:22 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:19.780 13:44:22 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:19.780 13:44:22 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:19.780 13:44:22 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:19.780 13:44:22 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:19.780 [2024-04-18 13:44:22.554107] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.780 [2024-04-18 13:44:22.570093] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.780 [2024-04-18 13:44:22.570344] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.037 [2024-04-18 13:44:22.601908] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:20.037 malloc0 00:15:20.037 13:44:22 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:20.037 13:44:22 -- fips/fips.sh@147 -- # bdevperf_pid=2613458 00:15:20.037 13:44:22 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:20.037 13:44:22 -- fips/fips.sh@148 -- # waitforlisten 2613458 /var/tmp/bdevperf.sock 00:15:20.037 13:44:22 -- common/autotest_common.sh@817 -- # '[' -z 2613458 ']' 00:15:20.037 13:44:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.037 13:44:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.037 13:44:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:20.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:20.037 13:44:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.037 13:44:22 -- common/autotest_common.sh@10 -- # set +x 00:15:20.037 [2024-04-18 13:44:22.687328] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:15:20.037 [2024-04-18 13:44:22.687404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2613458 ] 00:15:20.037 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.037 [2024-04-18 13:44:22.749236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.294 [2024-04-18 13:44:22.868756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.294 13:44:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.294 13:44:22 -- common/autotest_common.sh@850 -- # return 0 00:15:20.294 13:44:22 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:20.551 [2024-04-18 13:44:23.206804] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.551 [2024-04-18 13:44:23.206947] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:20.551 TLSTESTn1 00:15:20.551 13:44:23 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.807 Running I/O for 10 seconds... 
00:15:30.767 00:15:30.767 Latency(us) 00:15:30.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.767 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:30.767 Verification LBA range: start 0x0 length 0x2000 00:15:30.767 TLSTESTn1 : 10.03 3698.46 14.45 0.00 0.00 34543.08 10534.31 59030.95 00:15:30.767 =================================================================================================================== 00:15:30.767 Total : 3698.46 14.45 0.00 0.00 34543.08 10534.31 59030.95 00:15:30.767 0 00:15:30.767 13:44:33 -- fips/fips.sh@1 -- # cleanup 00:15:30.767 13:44:33 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:30.767 13:44:33 -- common/autotest_common.sh@794 -- # type=--id 00:15:30.767 13:44:33 -- common/autotest_common.sh@795 -- # id=0 00:15:30.767 13:44:33 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:30.767 13:44:33 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:30.767 13:44:33 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:30.767 13:44:33 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:30.767 13:44:33 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:30.767 13:44:33 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:30.767 nvmf_trace.0 00:15:30.767 13:44:33 -- common/autotest_common.sh@809 -- # return 0 00:15:30.767 13:44:33 -- fips/fips.sh@16 -- # killprocess 2613458 00:15:30.767 13:44:33 -- common/autotest_common.sh@936 -- # '[' -z 2613458 ']' 00:15:30.767 13:44:33 -- common/autotest_common.sh@940 -- # kill -0 2613458 00:15:30.767 13:44:33 -- common/autotest_common.sh@941 -- # uname 00:15:30.767 13:44:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.767 13:44:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2613458 00:15:30.767 
13:44:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:30.767 13:44:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:30.767 13:44:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2613458' 00:15:30.767 killing process with pid 2613458 00:15:30.767 13:44:33 -- common/autotest_common.sh@955 -- # kill 2613458 00:15:30.767 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.767 00:15:30.767 Latency(us) 00:15:30.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.767 =================================================================================================================== 00:15:30.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.767 [2024-04-18 13:44:33.547480] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:30.767 13:44:33 -- common/autotest_common.sh@960 -- # wait 2613458 00:15:31.025 13:44:33 -- fips/fips.sh@17 -- # nvmftestfini 00:15:31.025 13:44:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:31.026 13:44:33 -- nvmf/common.sh@117 -- # sync 00:15:31.026 13:44:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.026 13:44:33 -- nvmf/common.sh@120 -- # set +e 00:15:31.026 13:44:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.026 13:44:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.026 rmmod nvme_tcp 00:15:31.284 rmmod nvme_fabrics 00:15:31.284 rmmod nvme_keyring 00:15:31.284 13:44:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.284 13:44:33 -- nvmf/common.sh@124 -- # set -e 00:15:31.284 13:44:33 -- nvmf/common.sh@125 -- # return 0 00:15:31.284 13:44:33 -- nvmf/common.sh@478 -- # '[' -n 2613302 ']' 00:15:31.284 13:44:33 -- nvmf/common.sh@479 -- # killprocess 2613302 00:15:31.284 13:44:33 -- common/autotest_common.sh@936 -- # '[' -z 2613302 ']' 00:15:31.284 13:44:33 -- 
common/autotest_common.sh@940 -- # kill -0 2613302 00:15:31.284 13:44:33 -- common/autotest_common.sh@941 -- # uname 00:15:31.284 13:44:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.285 13:44:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2613302 00:15:31.285 13:44:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:31.285 13:44:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:31.285 13:44:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2613302' 00:15:31.285 killing process with pid 2613302 00:15:31.285 13:44:33 -- common/autotest_common.sh@955 -- # kill 2613302 00:15:31.285 [2024-04-18 13:44:33.909457] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:31.285 13:44:33 -- common/autotest_common.sh@960 -- # wait 2613302 00:15:31.543 13:44:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:31.543 13:44:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:31.543 13:44:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:31.543 13:44:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.543 13:44:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.543 13:44:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.543 13:44:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.543 13:44:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.443 13:44:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.443 13:44:36 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:33.701 00:15:33.701 real 0m17.464s 00:15:33.701 user 0m21.226s 00:15:33.701 sys 0m6.892s 00:15:33.701 13:44:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:33.701 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:15:33.701 
************************************ 00:15:33.701 END TEST nvmf_fips 00:15:33.701 ************************************ 00:15:33.701 13:44:36 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:33.701 13:44:36 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:33.701 13:44:36 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:33.701 13:44:36 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:33.701 13:44:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.701 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:15:35.602 13:44:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.603 13:44:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.603 13:44:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.603 13:44:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.603 13:44:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.603 13:44:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.603 13:44:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.603 13:44:38 -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.603 13:44:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.603 13:44:38 -- nvmf/common.sh@296 -- # e810=() 00:15:35.603 13:44:38 -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.603 13:44:38 -- nvmf/common.sh@297 -- # x722=() 00:15:35.603 13:44:38 -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.603 13:44:38 -- nvmf/common.sh@298 -- # mlx=() 00:15:35.603 13:44:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.603 13:44:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.603 13:44:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.603 13:44:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.603 13:44:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.603 13:44:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.603 13:44:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:35.603 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:35.603 13:44:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.603 13:44:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:35.603 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:35.603 13:44:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.603 13:44:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.603 13:44:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.603 13:44:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.603 13:44:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.603 13:44:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.603 13:44:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:35.603 Found net devices under 0000:84:00.0: cvl_0_0 00:15:35.603 13:44:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.603 13:44:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.603 13:44:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.603 13:44:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.603 13:44:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.603 13:44:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:35.603 Found net devices under 0000:84:00.1: cvl_0_1 00:15:35.603 13:44:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.603 13:44:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:35.603 13:44:38 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.603 13:44:38 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:35.603 13:44:38 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:35.603 13:44:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.603 13:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.603 13:44:38 -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.861 ************************************ 00:15:35.861 START TEST nvmf_perf_adq 00:15:35.861 ************************************ 00:15:35.861 13:44:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:35.861 * Looking for test storage... 00:15:35.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.861 13:44:38 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.861 13:44:38 -- nvmf/common.sh@7 -- # uname -s 00:15:35.861 13:44:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.861 13:44:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.861 13:44:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.861 13:44:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.861 13:44:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.861 13:44:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.861 13:44:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.861 13:44:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.861 13:44:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.861 13:44:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.861 13:44:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:35.861 13:44:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:35.861 13:44:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.861 13:44:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.861 13:44:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.861 13:44:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.861 13:44:38 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.861 13:44:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.861 13:44:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.861 13:44:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.861 13:44:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.861 13:44:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.861 13:44:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.862 13:44:38 -- paths/export.sh@5 -- # export PATH 00:15:35.862 13:44:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.862 13:44:38 -- nvmf/common.sh@47 -- # : 0 00:15:35.862 13:44:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.862 13:44:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.862 13:44:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.862 13:44:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.862 13:44:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.862 13:44:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.862 13:44:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.862 13:44:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.862 13:44:38 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:35.862 13:44:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.862 13:44:38 -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.764 13:44:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.764 13:44:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.764 13:44:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.764 13:44:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.764 13:44:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.764 13:44:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.764 13:44:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.764 13:44:40 -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.764 13:44:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.764 13:44:40 -- nvmf/common.sh@296 -- # e810=() 00:15:37.764 13:44:40 -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.764 13:44:40 -- nvmf/common.sh@297 -- # x722=() 00:15:37.764 13:44:40 -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.764 13:44:40 -- nvmf/common.sh@298 -- # mlx=() 00:15:37.764 13:44:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.764 13:44:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.764 13:44:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.764 13:44:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.764 13:44:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.764 13:44:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.764 13:44:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:37.764 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:37.764 13:44:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.764 13:44:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:37.764 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:37.764 13:44:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.764 13:44:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.764 13:44:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.764 13:44:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:37.764 13:44:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.764 13:44:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.764 13:44:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:37.764 Found net devices under 0000:84:00.0: cvl_0_0 00:15:37.764 13:44:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.764 13:44:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.764 13:44:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.764 13:44:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.764 13:44:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.764 13:44:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:37.764 Found net devices under 0000:84:00.1: cvl_0_1 00:15:37.764 13:44:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.764 13:44:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:37.764 13:44:40 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.764 13:44:40 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:15:37.764 13:44:40 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:37.764 13:44:40 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:15:37.764 13:44:40 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:38.333 13:44:41 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:40.233 13:44:42 -- target/perf_adq.sh@54 -- # sleep 5 00:15:45.534 13:44:47 -- target/perf_adq.sh@67 -- # nvmftestinit 00:15:45.534 13:44:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:45.534 13:44:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.534 13:44:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:45.534 13:44:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:45.534 13:44:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:45.534 
13:44:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.534 13:44:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.534 13:44:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.534 13:44:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:45.534 13:44:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:45.534 13:44:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.534 13:44:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.534 13:44:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:45.534 13:44:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.534 13:44:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.534 13:44:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.534 13:44:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.534 13:44:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.534 13:44:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.534 13:44:47 -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.534 13:44:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.534 13:44:47 -- nvmf/common.sh@296 -- # e810=() 00:15:45.534 13:44:47 -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.534 13:44:47 -- nvmf/common.sh@297 -- # x722=() 00:15:45.534 13:44:47 -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.534 13:44:47 -- nvmf/common.sh@298 -- # mlx=() 00:15:45.534 13:44:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:45.534 13:44:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.534 13:44:47 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.534 13:44:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.534 13:44:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.534 13:44:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.534 13:44:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.535 13:44:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.535 13:44:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:45.535 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:45.535 13:44:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.535 13:44:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:45.535 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:45.535 13:44:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.535 13:44:47 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.535 13:44:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.535 13:44:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.535 13:44:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:45.535 Found net devices under 0000:84:00.0: cvl_0_0 00:15:45.535 13:44:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.535 13:44:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.535 13:44:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.535 13:44:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.535 13:44:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:45.535 Found net devices under 0000:84:00.1: cvl_0_1 00:15:45.535 13:44:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.535 13:44:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:45.535 13:44:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:45.535 13:44:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.535 13:44:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.535 13:44:47 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.535 13:44:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.535 13:44:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.535 13:44:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.535 13:44:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.535 13:44:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.535 13:44:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.535 13:44:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.535 13:44:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.535 13:44:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.535 13:44:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.535 13:44:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.535 13:44:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.535 13:44:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.535 13:44:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.535 13:44:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.535 13:44:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.535 13:44:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:15:45.535 00:15:45.535 --- 10.0.0.2 ping statistics --- 00:15:45.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.535 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:45.535 13:44:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:15:45.535 00:15:45.535 --- 10.0.0.1 ping statistics --- 00:15:45.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.535 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:15:45.535 13:44:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.535 13:44:47 -- nvmf/common.sh@411 -- # return 0 00:15:45.535 13:44:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:45.535 13:44:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.535 13:44:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:45.535 13:44:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.535 13:44:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:45.535 13:44:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:45.535 13:44:47 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:45.535 13:44:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:45.535 13:44:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.535 13:44:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 13:44:47 -- nvmf/common.sh@470 -- # nvmfpid=2619246 00:15:45.535 13:44:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:45.535 13:44:47 -- nvmf/common.sh@471 -- # waitforlisten 2619246 00:15:45.535 13:44:47 -- common/autotest_common.sh@817 -- # '[' -z 2619246 ']' 00:15:45.535 13:44:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.535 13:44:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.535 13:44:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:45.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.535 13:44:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.535 13:44:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 [2024-04-18 13:44:47.822657] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:15:45.535 [2024-04-18 13:44:47.822739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.535 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.535 [2024-04-18 13:44:47.888056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.535 [2024-04-18 13:44:47.995325] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.535 [2024-04-18 13:44:47.995382] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.535 [2024-04-18 13:44:47.995395] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.535 [2024-04-18 13:44:47.995406] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.535 [2024-04-18 13:44:47.995416] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:45.535 [2024-04-18 13:44:47.995486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.535 [2024-04-18 13:44:47.995567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.535 [2024-04-18 13:44:47.995633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.535 [2024-04-18 13:44:47.995635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.535 13:44:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.535 13:44:48 -- common/autotest_common.sh@850 -- # return 0 00:15:45.535 13:44:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:45.535 13:44:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:45.535 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 13:44:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.535 13:44:48 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:15:45.535 13:44:48 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:15:45.535 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.535 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.535 13:44:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:45.535 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.535 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.535 13:44:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:15:45.535 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.535 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 [2024-04-18 13:44:48.178929] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:15:45.535 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.535 13:44:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:45.535 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.535 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.535 Malloc1 00:15:45.535 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.535 13:44:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:45.536 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.536 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.536 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.536 13:44:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.536 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.536 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.536 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.536 13:44:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.536 13:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.536 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.536 [2024-04-18 13:44:48.230292] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.536 13:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.536 13:44:48 -- target/perf_adq.sh@73 -- # perfpid=2619387 00:15:45.536 13:44:48 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:45.536 13:44:48 -- target/perf_adq.sh@74 -- # sleep 2 
00:15:45.536 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.439 13:44:50 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:15:47.439 13:44:50 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:15:47.439 13:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.439 13:44:50 -- target/perf_adq.sh@76 -- # wc -l 00:15:47.439 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.697 13:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.697 13:44:50 -- target/perf_adq.sh@76 -- # count=4 00:15:47.697 13:44:50 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:15:47.697 13:44:50 -- target/perf_adq.sh@81 -- # wait 2619387 00:15:55.813 Initializing NVMe Controllers 00:15:55.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:15:55.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:15:55.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:15:55.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:15:55.813 Initialization complete. Launching workers. 
00:15:55.813 ======================================================== 00:15:55.813 Latency(us) 00:15:55.813 Device Information : IOPS MiB/s Average min max 00:15:55.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9907.40 38.70 6460.81 2400.33 10271.25 00:15:55.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10096.10 39.44 6339.42 2065.67 9212.33 00:15:55.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10198.90 39.84 6275.28 1475.32 9610.60 00:15:55.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10071.00 39.34 6355.43 2307.55 9340.21 00:15:55.813 ======================================================== 00:15:55.813 Total : 40273.39 157.32 6357.04 1475.32 10271.25 00:15:55.813 00:15:55.813 13:44:58 -- target/perf_adq.sh@82 -- # nvmftestfini 00:15:55.813 13:44:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:55.813 13:44:58 -- nvmf/common.sh@117 -- # sync 00:15:55.813 13:44:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.813 13:44:58 -- nvmf/common.sh@120 -- # set +e 00:15:55.813 13:44:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.813 13:44:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.813 rmmod nvme_tcp 00:15:55.813 rmmod nvme_fabrics 00:15:55.813 rmmod nvme_keyring 00:15:55.813 13:44:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.813 13:44:58 -- nvmf/common.sh@124 -- # set -e 00:15:55.813 13:44:58 -- nvmf/common.sh@125 -- # return 0 00:15:55.813 13:44:58 -- nvmf/common.sh@478 -- # '[' -n 2619246 ']' 00:15:55.813 13:44:58 -- nvmf/common.sh@479 -- # killprocess 2619246 00:15:55.813 13:44:58 -- common/autotest_common.sh@936 -- # '[' -z 2619246 ']' 00:15:55.813 13:44:58 -- common/autotest_common.sh@940 -- # kill -0 2619246 00:15:55.813 13:44:58 -- common/autotest_common.sh@941 -- # uname 00:15:55.813 13:44:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.813 13:44:58 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2619246 00:15:55.813 13:44:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.813 13:44:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.813 13:44:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2619246' 00:15:55.813 killing process with pid 2619246 00:15:55.813 13:44:58 -- common/autotest_common.sh@955 -- # kill 2619246 00:15:55.813 13:44:58 -- common/autotest_common.sh@960 -- # wait 2619246 00:15:56.072 13:44:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:56.072 13:44:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:56.072 13:44:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:56.072 13:44:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.072 13:44:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.072 13:44:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.072 13:44:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.072 13:44:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.979 13:45:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.979 13:45:00 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:15:57.979 13:45:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:58.914 13:45:01 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:00.289 13:45:02 -- target/perf_adq.sh@54 -- # sleep 5 00:16:05.561 13:45:07 -- target/perf_adq.sh@87 -- # nvmftestinit 00:16:05.561 13:45:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:05.561 13:45:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.561 13:45:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:05.561 13:45:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:05.561 13:45:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:05.561 13:45:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.561 
13:45:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.561 13:45:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.561 13:45:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:05.561 13:45:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:05.561 13:45:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.561 13:45:07 -- common/autotest_common.sh@10 -- # set +x 00:16:05.561 13:45:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:05.561 13:45:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:05.561 13:45:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:05.561 13:45:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:05.561 13:45:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:05.561 13:45:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:05.561 13:45:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:05.561 13:45:07 -- nvmf/common.sh@295 -- # net_devs=() 00:16:05.561 13:45:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:05.561 13:45:07 -- nvmf/common.sh@296 -- # e810=() 00:16:05.561 13:45:07 -- nvmf/common.sh@296 -- # local -ga e810 00:16:05.561 13:45:07 -- nvmf/common.sh@297 -- # x722=() 00:16:05.561 13:45:07 -- nvmf/common.sh@297 -- # local -ga x722 00:16:05.561 13:45:07 -- nvmf/common.sh@298 -- # mlx=() 00:16:05.561 13:45:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:05.561 13:45:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.561 13:45:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.561 13:45:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.561 13:45:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.561 13:45:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.561 13:45:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.561 13:45:07 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:05.561 13:45:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:05.561 13:45:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:05.561 13:45:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:05.561 13:45:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:05.561 13:45:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:05.561 13:45:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:05.561 13:45:07 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:05.561 13:45:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:05.561 13:45:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:16:05.561 Found 0000:84:00.0 (0x8086 - 0x159b)
00:16:05.561 13:45:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:05.561 13:45:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:16:05.561 Found 0000:84:00.1 (0x8086 - 0x159b)
00:16:05.561 13:45:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:05.561 13:45:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:05.561 13:45:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:05.561 13:45:08 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:05.561 13:45:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:05.561 13:45:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:05.561 13:45:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:05.561 13:45:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:05.561 13:45:08 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:05.561 13:45:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:05.561 13:45:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:16:05.561 Found net devices under 0000:84:00.0: cvl_0_0
00:16:05.561 13:45:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:05.561 13:45:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:05.561 13:45:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:05.561 13:45:08 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:05.561 13:45:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:05.561 13:45:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:16:05.561 Found net devices under 0000:84:00.1: cvl_0_1
00:16:05.561 13:45:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:05.561 13:45:08 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:16:05.561 13:45:08 -- nvmf/common.sh@403 -- # is_hw=yes
00:16:05.561 13:45:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:16:05.561 13:45:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:16:05.562 13:45:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:16:05.562 13:45:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:05.562 13:45:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:05.562 13:45:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:05.562 13:45:08 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:05.562 13:45:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:05.562 13:45:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:05.562 13:45:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:05.562 13:45:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:05.562 13:45:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:05.562 13:45:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:05.562 13:45:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:05.562 13:45:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:05.562 13:45:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:05.562 13:45:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:05.562 13:45:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:05.562 13:45:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:05.562 13:45:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:05.562 13:45:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:05.562 13:45:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:05.562 13:45:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:05.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:05.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:16:05.562
00:16:05.562 --- 10.0.0.2 ping statistics ---
00:16:05.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:05.562 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:16:05.562 13:45:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:05.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:05.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:16:05.562
00:16:05.562 --- 10.0.0.1 ping statistics ---
00:16:05.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:05.562 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:16:05.562 13:45:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:05.562 13:45:08 -- nvmf/common.sh@411 -- # return 0
00:16:05.562 13:45:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:05.562 13:45:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:05.562 13:45:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:05.562 13:45:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:05.562 13:45:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:05.562 13:45:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:05.562 13:45:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:05.562 13:45:08 -- target/perf_adq.sh@88 -- # adq_configure_driver
00:16:05.562 13:45:08 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:16:05.562 13:45:08 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:16:05.562 13:45:08 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:16:05.562 net.core.busy_poll = 1
00:16:05.562 13:45:08 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:16:05.562 net.core.busy_read = 1
00:16:05.562 13:45:08 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:16:05.562 13:45:08 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:16:05.562 13:45:08 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:16:05.562 13:45:08 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:16:05.562 13:45:08 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:16:05.562 13:45:08 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc
00:16:05.562 13:45:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:05.562 13:45:08 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:05.562 13:45:08 -- common/autotest_common.sh@10 -- # set +x
00:16:05.562 13:45:08 -- nvmf/common.sh@470 -- # nvmfpid=2622243
00:16:05.562 13:45:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:16:05.562 13:45:08 -- nvmf/common.sh@471 -- # waitforlisten 2622243
00:16:05.562 13:45:08 -- common/autotest_common.sh@817 -- # '[' -z 2622243 ']'
00:16:05.562 13:45:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:05.562 13:45:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:05.562 13:45:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:05.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:05.562 13:45:08 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:05.562 13:45:08 -- common/autotest_common.sh@10 -- # set +x
00:16:05.562 [2024-04-18 13:45:08.333013] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
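The nvmf_tcp_init and adq_configure_driver steps traced above (move one NIC port into a network namespace, address both sides, open port 4420, then enable busy polling and pin port-4420 traffic to a hardware traffic class with mqprio/flower) can be sketched as one script. This is a dry-run sketch, not the SPDK test code itself: the interface names, namespace, and addresses are copied from the log, and the hypothetical `run` wrapper only echoes each command so the sketch can be exercised without root or an Intel E810 NIC.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP + ADQ setup recorded in the log above.
# run() only prints the command; drop the wrapper (and add sudo) to apply it.
set -euo pipefail

NS=cvl_0_0_ns_spdk          # namespace holding the target-side port
TGT_IF=cvl_0_0              # target port (moved into the namespace)
INI_IF=cvl_0_1              # initiator port (stays in the root namespace)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1
PORT=4420                   # NVMe/TCP listener port

run() { echo "+ $*"; }

# 1. Point-to-point topology: one port per namespace, /24 on each side.
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport "$PORT" -j ACCEPT

# 2. ADQ: busy polling plus a dedicated hardware traffic class for the port.
run sysctl -w net.core.busy_poll=1 net.core.busy_read=1
run ip netns exec "$NS" ethtool --offload "$TGT_IF" hw-tc-offload on
run ip netns exec "$NS" tc qdisc add dev "$TGT_IF" root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
run ip netns exec "$NS" tc qdisc add dev "$TGT_IF" ingress
run ip netns exec "$NS" tc filter add dev "$TGT_IF" protocol ip parent ffff: \
    prio 1 flower dst_ip "$TGT_IP/32" ip_proto tcp dst_port "$PORT" \
    skip_sw hw_tc 1
```

The `queues 2@0 2@2` mapping in the log gives each of the two traffic classes two queues, and the flower filter with `skip_sw hw_tc 1` is what steers port-4420 connections onto the second (ADQ) class in hardware.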
00:16:05.562 [2024-04-18 13:45:08.333094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:05.820 EAL: No free 2048 kB hugepages reported on node 1
00:16:05.820 [2024-04-18 13:45:08.408614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:05.820 [2024-04-18 13:45:08.531469] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:05.820 [2024-04-18 13:45:08.531542] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:05.820 [2024-04-18 13:45:08.531562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:05.820 [2024-04-18 13:45:08.531575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:05.820 [2024-04-18 13:45:08.531588] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:05.820 [2024-04-18 13:45:08.531662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:05.820 [2024-04-18 13:45:08.531724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:05.820 [2024-04-18 13:45:08.531846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:05.820 [2024-04-18 13:45:08.531849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:06.760 13:45:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:06.760 13:45:09 -- common/autotest_common.sh@850 -- # return 0
00:16:06.760 13:45:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:06.760 13:45:09 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:06.760 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.760 13:45:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:06.760 13:45:09 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1
00:16:06.760 13:45:09 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:16:06.760 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.760 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.760 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.760 13:45:09 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init
00:16:06.760 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:16:06.761 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 [2024-04-18 13:45:09.433710] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:06.761 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 Malloc1
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:06.761 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:06.761 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:06.761 13:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:06.761 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:16:06.761 [2024-04-18 13:45:09.485436] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:06.761 13:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:06.761 13:45:09 -- target/perf_adq.sh@94 -- # perfpid=2622648
00:16:06.761 13:45:09 -- target/perf_adq.sh@95 -- # sleep 2
00:16:06.761 13:45:09 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:16:06.761 EAL: No free 2048 kB hugepages reported on node 1
00:16:08.718 13:45:11 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats
00:16:08.718 13:45:11 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:16:08.719 13:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.719 13:45:11 -- common/autotest_common.sh@10 -- # set +x
00:16:08.719 13:45:11 -- target/perf_adq.sh@97 -- # wc -l
00:16:08.719 13:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.976 13:45:11 -- target/perf_adq.sh@97 -- # count=2
00:16:08.976 13:45:11 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]]
00:16:08.976 13:45:11 -- target/perf_adq.sh@103 -- # wait 2622648
00:16:17.097 Initializing NVMe Controllers
00:16:17.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:17.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:16:17.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:16:17.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:16:17.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:16:17.097 Initialization complete. Launching workers.
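The check at perf_adq.sh@97-98 above verifies the ADQ steering: it pipes `nvmf_get_stats` through `jq`, counts poll groups that are carrying zero IO qpairs, and fails the test if that count is below the expected value. A minimal stand-in for that check, using `grep -c` on a canned `nvmf_get_stats`-style fragment instead of a live RPC and jq (the JSON below is illustrative, not real target output):

```shell
#!/usr/bin/env bash
# Stand-in for the perf_adq.sh@97 check: count poll groups with no IO qpairs.
# A live run would pipe `rpc.py nvmf_get_stats` through jq; a canned JSON
# fragment replaces the RPC output here so the logic runs anywhere.
set -euo pipefail

stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_0","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_1","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_2","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_3","current_io_qpairs":2}]}'

# ADQ confines connections to the traffic-class queues, so with 4 poll
# groups and 2 ADQ queues the test expects at least 2 groups to stay idle.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":0')
echo "idle poll groups: $count"
if [ "$count" -lt 2 ]; then
    echo "ADQ steering failed: connections spread across all poll groups" >&2
fi
```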
00:16:17.097 ========================================================
00:16:17.097 Latency(us)
00:16:17.097 Device Information : IOPS MiB/s Average min max
00:16:17.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6866.20 26.82 9354.42 1542.55 53470.09
00:16:17.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6386.40 24.95 10053.88 1859.16 53683.13
00:16:17.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6809.30 26.60 9399.23 1824.70 54634.23
00:16:17.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6552.30 25.59 9802.61 1732.89 54731.86
00:16:17.097 ========================================================
00:16:17.097 Total : 26614.19 103.96 9644.07 1542.55 54731.86
00:16:17.097
00:16:17.097 13:45:19 -- target/perf_adq.sh@104 -- # nvmftestfini
00:16:17.097 13:45:19 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:17.097 13:45:19 -- nvmf/common.sh@117 -- # sync
00:16:17.097 13:45:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:17.097 13:45:19 -- nvmf/common.sh@120 -- # set +e
00:16:17.097 13:45:19 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:17.097 13:45:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:17.097 rmmod nvme_tcp
00:16:17.097 rmmod nvme_fabrics
00:16:17.097 rmmod nvme_keyring
00:16:17.097 13:45:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:17.097 13:45:19 -- nvmf/common.sh@124 -- # set -e
00:16:17.097 13:45:19 -- nvmf/common.sh@125 -- # return 0
00:16:17.097 13:45:19 -- nvmf/common.sh@478 -- # '[' -n 2622243 ']'
00:16:17.097 13:45:19 -- nvmf/common.sh@479 -- # killprocess 2622243
00:16:17.097 13:45:19 -- common/autotest_common.sh@936 -- # '[' -z 2622243 ']'
00:16:17.097 13:45:19 -- common/autotest_common.sh@940 -- # kill -0 2622243
00:16:17.097 13:45:19 -- common/autotest_common.sh@941 -- # uname
00:16:17.097 13:45:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:17.097 13:45:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2622243
00:16:17.097 13:45:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:17.097 13:45:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:17.097 13:45:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2622243'
00:16:17.097 killing process with pid 2622243
00:16:17.097 13:45:19 -- common/autotest_common.sh@955 -- # kill 2622243
00:16:17.097 13:45:19 -- common/autotest_common.sh@960 -- # wait 2622243
00:16:17.357 13:45:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:17.357 13:45:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:17.357 13:45:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:17.357 13:45:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:17.357 13:45:20 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:17.357 13:45:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:17.357 13:45:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:17.357 13:45:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:19.897 13:45:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:19.897 13:45:22 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT
00:16:19.897
00:16:19.897 real 0m43.691s
00:16:19.897 user 2m42.752s
00:16:19.897 sys 0m9.738s
00:16:19.897 13:45:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:19.897 13:45:22 -- common/autotest_common.sh@10 -- # set +x
00:16:19.897 ************************************
00:16:19.897 END TEST nvmf_perf_adq
00:16:19.897 ************************************
00:16:19.897 13:45:22 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:16:19.897 13:45:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:19.897 13:45:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
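The nvmftestfini/killprocess sequence traced above only kills the target after confirming the pid is still alive (`kill -0`) and inspecting its comm name (`ps --no-headers -o comm=`, which reports `reactor_0` for an SPDK app). A simplified sketch of that pattern, demonstrated against a throwaway background process rather than a real nvmf_tgt (the real helper also special-cases processes running under sudo, which is omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the autotest_common.sh killprocess pattern: verify the pid is
# alive, log its comm name, then kill it and reap the exit status.
set -euo pipefail

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for nvmf_tgt
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap if it is our own child
}

# Demo: spawn a disposable process and shut it down cleanly.
sleep 30 &
victim=$!
killprocess_sketch "$victim"
```

Reaping with `wait` matters in the test scripts: it keeps the exit path deterministic, so the subsequent namespace teardown (`remove_spdk_ns`, `ip -4 addr flush`) never races with a still-dying target.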
00:16:19.897 13:45:22 -- common/autotest_common.sh@10 -- # set +x
00:16:19.897 ************************************
00:16:19.897 START TEST nvmf_shutdown
00:16:19.897 ************************************
00:16:19.897 13:45:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:16:19.897 * Looking for test storage...
00:16:19.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:19.897 13:45:22 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:19.897 13:45:22 -- nvmf/common.sh@7 -- # uname -s
00:16:19.897 13:45:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:19.897 13:45:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:19.897 13:45:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:19.897 13:45:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:19.897 13:45:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:19.897 13:45:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:19.897 13:45:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:19.897 13:45:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:19.897 13:45:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:19.897 13:45:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:19.897 13:45:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:19.897 13:45:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:16:19.897 13:45:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:19.897 13:45:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:19.897 13:45:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:19.897 13:45:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:19.897 13:45:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:19.897 13:45:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:19.897 13:45:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:19.897 13:45:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:19.897 13:45:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.897 13:45:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.897 13:45:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.897 13:45:22 -- paths/export.sh@5 -- # export PATH
00:16:19.897 13:45:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:19.897 13:45:22 -- nvmf/common.sh@47 -- # : 0
00:16:19.897 13:45:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:19.897 13:45:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:19.897 13:45:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:19.897 13:45:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:19.897 13:45:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:19.897 13:45:22 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:19.897 13:45:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:19.897 13:45:22 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:19.897 13:45:22 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:19.897 13:45:22 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:19.897 13:45:22 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:16:19.897 13:45:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:19.897 13:45:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:19.897 13:45:22 -- common/autotest_common.sh@10 -- # set +x
00:16:19.897 ************************************
00:16:19.897 START TEST nvmf_shutdown_tc1
00:16:19.897 ************************************
00:16:19.897 13:45:22 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1
00:16:19.897 13:45:22 -- target/shutdown.sh@74 -- # starttarget
00:16:19.897 13:45:22 -- target/shutdown.sh@15 -- # nvmftestinit
00:16:19.897 13:45:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:19.897 13:45:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:19.897 13:45:22 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:19.897 13:45:22 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:19.897 13:45:22 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:19.897 13:45:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:19.897 13:45:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:19.897 13:45:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:19.897 13:45:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:19.897 13:45:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:19.897 13:45:22 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:19.897 13:45:22 -- common/autotest_common.sh@10 -- # set +x
00:16:21.802 13:45:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:21.802 13:45:24 -- nvmf/common.sh@291 -- # pci_devs=()
00:16:21.802 13:45:24 -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:21.802 13:45:24 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:21.802 13:45:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:21.802 13:45:24 -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:21.802 13:45:24 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:21.802 13:45:24 -- nvmf/common.sh@295 -- # net_devs=()
00:16:21.802 13:45:24 -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:21.802 13:45:24 -- nvmf/common.sh@296 -- # e810=()
00:16:21.802 13:45:24 -- nvmf/common.sh@296 -- # local -ga e810
00:16:21.802 13:45:24 -- nvmf/common.sh@297 -- # x722=()
00:16:21.802 13:45:24 -- nvmf/common.sh@297 -- # local -ga x722
00:16:21.802 13:45:24 -- nvmf/common.sh@298 -- # mlx=()
00:16:21.802 13:45:24 -- nvmf/common.sh@298 -- # local -ga mlx
00:16:21.802 13:45:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:21.802 13:45:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:21.802 13:45:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:16:21.802 Found 0000:84:00.0 (0x8086 - 0x159b)
00:16:21.802 13:45:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:21.802 13:45:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:16:21.802 Found 0000:84:00.1 (0x8086 - 0x159b)
00:16:21.802 13:45:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:21.802 13:45:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:21.802 13:45:24 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:21.802 13:45:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:16:21.802 Found net devices under 0000:84:00.0: cvl_0_0
00:16:21.802 13:45:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:21.802 13:45:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:21.802 13:45:24 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:21.802 13:45:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:16:21.802 Found net devices under 0000:84:00.1: cvl_0_1
00:16:21.802 13:45:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@403 -- # is_hw=yes
00:16:21.802 13:45:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:16:21.802 13:45:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:16:21.802 13:45:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:21.802 13:45:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:21.802 13:45:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:21.802 13:45:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:21.802 13:45:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:21.802 13:45:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:21.802 13:45:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:21.802 13:45:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:21.802 13:45:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:21.802 13:45:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:21.802 13:45:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:21.802 13:45:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:21.802 13:45:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:21.802 13:45:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:21.802 13:45:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:21.802 13:45:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:21.802 13:45:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:21.802 13:45:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:21.802 13:45:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:21.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:21.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms
00:16:21.802
00:16:21.802 --- 10.0.0.2 ping statistics ---
00:16:21.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:21.802 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:16:21.802 13:45:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:21.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:21.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms
00:16:21.802
00:16:21.802 --- 10.0.0.1 ping statistics ---
00:16:21.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:21.802 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:16:21.802 13:45:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:21.802 13:45:24 -- nvmf/common.sh@411 -- # return 0
00:16:21.802 13:45:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:21.802 13:45:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:21.802 13:45:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:21.803 13:45:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:21.803 13:45:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:21.803 13:45:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:21.803 13:45:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:21.803 13:45:24 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:16:21.803 13:45:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:21.803 13:45:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:21.803 13:45:24 -- common/autotest_common.sh@10 -- # set +x
00:16:21.803 13:45:24 -- nvmf/common.sh@470 -- # nvmfpid=2625967
00:16:21.803 13:45:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:16:21.803 13:45:24 -- nvmf/common.sh@471 -- # waitforlisten 2625967
00:16:21.803 13:45:24 -- common/autotest_common.sh@817 -- # '[' -z 2625967 ']'
00:16:21.803 13:45:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:21.803 13:45:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:21.803 13:45:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:21.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:21.803 13:45:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:21.803 13:45:24 -- common/autotest_common.sh@10 -- # set +x
00:16:21.803 [2024-04-18 13:45:24.537730] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:16:21.803 [2024-04-18 13:45:24.537800] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:21.803 EAL: No free 2048 kB hugepages reported on node 1
00:16:21.803 [2024-04-18 13:45:24.603359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:22.061 [2024-04-18 13:45:24.713928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:22.061 [2024-04-18 13:45:24.713992] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:22.061 [2024-04-18 13:45:24.714005] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:22.061 [2024-04-18 13:45:24.714016] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:22.061 [2024-04-18 13:45:24.714033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:22.061 [2024-04-18 13:45:24.714124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:22.061 [2024-04-18 13:45:24.714197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:22.061 [2024-04-18 13:45:24.714236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:22.061 [2024-04-18 13:45:24.714239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:22.061 13:45:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:22.061 13:45:24 -- common/autotest_common.sh@850 -- # return 0
00:16:22.061 13:45:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:22.061 13:45:24 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:22.061 13:45:24 -- common/autotest_common.sh@10 -- # set +x
00:16:22.320 13:45:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:22.320 13:45:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:22.320 13:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:22.320 13:45:24 -- common/autotest_common.sh@10 -- # set +x
00:16:22.320 [2024-04-18 13:45:24.877060] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:22.320 13:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:22.320 13:45:24 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:16:22.320 13:45:24 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:16:22.320 13:45:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:22.320 13:45:24 --
common/autotest_common.sh@10 -- # set +x 00:16:22.320 13:45:24 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:22.320 13:45:24 -- target/shutdown.sh@28 -- # cat 00:16:22.320 13:45:24 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:22.320 13:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.320 13:45:24 -- common/autotest_common.sh@10 -- # set +x 00:16:22.320 Malloc1 00:16:22.320 [2024-04-18 13:45:24.967104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.320 Malloc2 00:16:22.320 Malloc3 00:16:22.320 Malloc4 
00:16:22.578 Malloc5 00:16:22.578 Malloc6 00:16:22.578 Malloc7 00:16:22.578 Malloc8 00:16:22.578 Malloc9 00:16:22.836 Malloc10 00:16:22.836 13:45:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.836 13:45:25 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:22.836 13:45:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:22.836 13:45:25 -- common/autotest_common.sh@10 -- # set +x 00:16:22.836 13:45:25 -- target/shutdown.sh@78 -- # perfpid=2626064 00:16:22.836 13:45:25 -- target/shutdown.sh@79 -- # waitforlisten 2626064 /var/tmp/bdevperf.sock 00:16:22.836 13:45:25 -- common/autotest_common.sh@817 -- # '[' -z 2626064 ']' 00:16:22.836 13:45:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.836 13:45:25 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:22.836 13:45:25 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:22.836 13:45:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:22.836 13:45:25 -- nvmf/common.sh@521 -- # config=() 00:16:22.836 13:45:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:22.836 13:45:25 -- nvmf/common.sh@521 -- # local subsystem config 00:16:22.836 13:45:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:22.836 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.836 13:45:25 -- common/autotest_common.sh@10 -- # set +x 00:16:22.836 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.836 { 00:16:22.836 "params": { 00:16:22.836 "name": "Nvme$subsystem", 00:16:22.836 "trtype": "$TEST_TRANSPORT", 00:16:22.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.836 "adrfam": "ipv4", 00:16:22.836 "trsvcid": "$NVMF_PORT", 00:16:22.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.836 "hdgst": ${hdgst:-false}, 00:16:22.836 "ddgst": ${ddgst:-false} 00:16:22.836 }, 00:16:22.836 "method": "bdev_nvme_attach_controller" 00:16:22.836 } 00:16:22.836 EOF 00:16:22.836 )") 00:16:22.836 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.836 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.836 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.836 { 00:16:22.836 "params": { 00:16:22.836 "name": "Nvme$subsystem", 00:16:22.836 "trtype": "$TEST_TRANSPORT", 00:16:22.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.836 "adrfam": "ipv4", 00:16:22.836 "trsvcid": "$NVMF_PORT", 00:16:22.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.836 "hdgst": ${hdgst:-false}, 00:16:22.836 "ddgst": ${ddgst:-false} 00:16:22.836 }, 00:16:22.836 "method": "bdev_nvme_attach_controller" 00:16:22.836 } 00:16:22.836 EOF 00:16:22.836 )") 00:16:22.836 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.836 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.836 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.836 { 00:16:22.836 "params": { 00:16:22.836 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 
00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- 
# cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:22.837 { 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme$subsystem", 00:16:22.837 "trtype": "$TEST_TRANSPORT", 00:16:22.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "$NVMF_PORT", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.837 "hdgst": ${hdgst:-false}, 00:16:22.837 "ddgst": ${ddgst:-false} 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 } 00:16:22.837 EOF 00:16:22.837 )") 00:16:22.837 13:45:25 -- nvmf/common.sh@543 -- # cat 00:16:22.837 13:45:25 -- nvmf/common.sh@545 -- # jq . 
00:16:22.837 13:45:25 -- nvmf/common.sh@546 -- # IFS=, 00:16:22.837 13:45:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme1", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme2", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme3", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme4", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme5", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 
00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme6", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme7", 00:16:22.837 "trtype": "tcp", 00:16:22.837 "traddr": "10.0.0.2", 00:16:22.837 "adrfam": "ipv4", 00:16:22.837 "trsvcid": "4420", 00:16:22.837 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:22.837 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:22.837 "hdgst": false, 00:16:22.837 "ddgst": false 00:16:22.837 }, 00:16:22.837 "method": "bdev_nvme_attach_controller" 00:16:22.837 },{ 00:16:22.837 "params": { 00:16:22.837 "name": "Nvme8", 00:16:22.838 "trtype": "tcp", 00:16:22.838 "traddr": "10.0.0.2", 00:16:22.838 "adrfam": "ipv4", 00:16:22.838 "trsvcid": "4420", 00:16:22.838 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:22.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:22.838 "hdgst": false, 00:16:22.838 "ddgst": false 00:16:22.838 }, 00:16:22.838 "method": "bdev_nvme_attach_controller" 00:16:22.838 },{ 00:16:22.838 "params": { 00:16:22.838 "name": "Nvme9", 00:16:22.838 "trtype": "tcp", 00:16:22.838 "traddr": "10.0.0.2", 00:16:22.838 "adrfam": "ipv4", 00:16:22.838 "trsvcid": "4420", 00:16:22.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:22.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:22.838 "hdgst": false, 00:16:22.838 "ddgst": false 00:16:22.838 }, 00:16:22.838 "method": "bdev_nvme_attach_controller" 
00:16:22.838 },{ 00:16:22.838 "params": { 00:16:22.838 "name": "Nvme10", 00:16:22.838 "trtype": "tcp", 00:16:22.838 "traddr": "10.0.0.2", 00:16:22.838 "adrfam": "ipv4", 00:16:22.838 "trsvcid": "4420", 00:16:22.838 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:22.838 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:22.838 "hdgst": false, 00:16:22.838 "ddgst": false 00:16:22.838 }, 00:16:22.838 "method": "bdev_nvme_attach_controller" 00:16:22.838 }' 00:16:22.838 [2024-04-18 13:45:25.484419] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:22.838 [2024-04-18 13:45:25.484535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:22.838 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.838 [2024-04-18 13:45:25.549562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.095 [2024-04-18 13:45:25.659009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.997 13:45:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.997 13:45:27 -- common/autotest_common.sh@850 -- # return 0 00:16:24.997 13:45:27 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:24.997 13:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.997 13:45:27 -- common/autotest_common.sh@10 -- # set +x 00:16:24.997 13:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.997 13:45:27 -- target/shutdown.sh@83 -- # kill -9 2626064 00:16:24.997 13:45:27 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:24.997 13:45:27 -- target/shutdown.sh@87 -- # sleep 1 00:16:25.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2626064 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:16:25.936 13:45:28 -- target/shutdown.sh@88 -- # kill -0 2625967 00:16:25.936 13:45:28 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:25.936 13:45:28 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:25.936 13:45:28 -- nvmf/common.sh@521 -- # config=() 00:16:25.936 13:45:28 -- nvmf/common.sh@521 -- # local subsystem config 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 
-- # cat 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.936 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.936 { 00:16:25.936 "params": { 00:16:25.936 "name": "Nvme$subsystem", 00:16:25.936 "trtype": "$TEST_TRANSPORT", 00:16:25.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.936 "adrfam": "ipv4", 00:16:25.936 "trsvcid": "$NVMF_PORT", 00:16:25.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.936 "hdgst": ${hdgst:-false}, 00:16:25.936 "ddgst": ${ddgst:-false} 00:16:25.936 }, 00:16:25.936 "method": "bdev_nvme_attach_controller" 00:16:25.936 } 00:16:25.936 EOF 00:16:25.936 )") 00:16:25.936 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.937 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.937 { 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme$subsystem", 00:16:25.937 "trtype": "$TEST_TRANSPORT", 00:16:25.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "$NVMF_PORT", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.937 "hdgst": ${hdgst:-false}, 00:16:25.937 "ddgst": ${ddgst:-false} 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 } 00:16:25.937 EOF 00:16:25.937 )") 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.937 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.937 { 00:16:25.937 "params": { 
00:16:25.937 "name": "Nvme$subsystem", 00:16:25.937 "trtype": "$TEST_TRANSPORT", 00:16:25.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "$NVMF_PORT", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.937 "hdgst": ${hdgst:-false}, 00:16:25.937 "ddgst": ${ddgst:-false} 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 } 00:16:25.937 EOF 00:16:25.937 )") 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.937 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.937 { 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme$subsystem", 00:16:25.937 "trtype": "$TEST_TRANSPORT", 00:16:25.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "$NVMF_PORT", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.937 "hdgst": ${hdgst:-false}, 00:16:25.937 "ddgst": ${ddgst:-false} 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 } 00:16:25.937 EOF 00:16:25.937 )") 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.937 13:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.937 { 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme$subsystem", 00:16:25.937 "trtype": "$TEST_TRANSPORT", 00:16:25.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "$NVMF_PORT", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.937 "hdgst": ${hdgst:-false}, 00:16:25.937 "ddgst": ${ddgst:-false} 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 } 
00:16:25.937 EOF 00:16:25.937 )") 00:16:25.937 13:45:28 -- nvmf/common.sh@543 -- # cat 00:16:25.937 13:45:28 -- nvmf/common.sh@545 -- # jq . 00:16:25.937 13:45:28 -- nvmf/common.sh@546 -- # IFS=, 00:16:25.937 13:45:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme1", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme2", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme3", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme4", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 
00:16:25.937 "params": { 00:16:25.937 "name": "Nvme5", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme6", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme7", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme8", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme9", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:25.937 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 },{ 00:16:25.937 "params": { 00:16:25.937 "name": "Nvme10", 00:16:25.937 "trtype": "tcp", 00:16:25.937 "traddr": "10.0.0.2", 00:16:25.937 "adrfam": "ipv4", 00:16:25.937 "trsvcid": "4420", 00:16:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:25.937 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:25.937 "hdgst": false, 00:16:25.937 "ddgst": false 00:16:25.937 }, 00:16:25.937 "method": "bdev_nvme_attach_controller" 00:16:25.937 }' 00:16:25.937 [2024-04-18 13:45:28.498559] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:25.937 [2024-04-18 13:45:28.498657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626449 ] 00:16:25.937 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.937 [2024-04-18 13:45:28.563809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.937 [2024-04-18 13:45:28.677227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.839 Running I/O for 1 seconds... 
00:16:28.808 00:16:28.808 Latency(us) 00:16:28.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.808 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme1n1 : 1.15 226.85 14.18 0.00 0.00 278353.98 6068.15 260978.92 00:16:28.808 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme2n1 : 1.16 221.13 13.82 0.00 0.00 280583.40 19126.80 264085.81 00:16:28.808 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme3n1 : 1.13 226.60 14.16 0.00 0.00 270034.87 18835.53 256318.58 00:16:28.808 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme4n1 : 1.10 237.61 14.85 0.00 0.00 249857.70 10388.67 259425.47 00:16:28.808 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme5n1 : 1.17 219.54 13.72 0.00 0.00 270351.36 21068.61 264085.81 00:16:28.808 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme6n1 : 1.15 239.45 14.97 0.00 0.00 236245.59 16019.91 254765.13 00:16:28.808 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme7n1 : 1.15 225.15 14.07 0.00 0.00 254239.35 3665.16 246997.90 00:16:28.808 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme8n1 : 1.18 271.29 16.96 0.00 0.00 208301.32 21165.70 260978.92 00:16:28.808 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme9n1 : 1.17 217.90 13.62 0.00 0.00 254808.94 21262.79 278066.82 00:16:28.808 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.808 Verification LBA range: start 0x0 length 0x400 00:16:28.808 Nvme10n1 : 1.17 222.98 13.94 0.00 0.00 244099.83 761.55 288940.94 00:16:28.808 =================================================================================================================== 00:16:28.808 Total : 2308.50 144.28 0.00 0.00 253446.24 761.55 288940.94 00:16:29.067 13:45:31 -- target/shutdown.sh@94 -- # stoptarget 00:16:29.067 13:45:31 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:29.067 13:45:31 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:29.068 13:45:31 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:29.068 13:45:31 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:29.068 13:45:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:29.068 13:45:31 -- nvmf/common.sh@117 -- # sync 00:16:29.068 13:45:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.068 13:45:31 -- nvmf/common.sh@120 -- # set +e 00:16:29.068 13:45:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.068 13:45:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.068 rmmod nvme_tcp 00:16:29.068 rmmod nvme_fabrics 00:16:29.068 rmmod nvme_keyring 00:16:29.068 13:45:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.068 13:45:31 -- nvmf/common.sh@124 -- # set -e 00:16:29.068 13:45:31 -- nvmf/common.sh@125 -- # return 0 00:16:29.068 13:45:31 -- nvmf/common.sh@478 -- # '[' -n 2625967 ']' 00:16:29.068 13:45:31 -- nvmf/common.sh@479 -- # killprocess 2625967 00:16:29.068 13:45:31 -- common/autotest_common.sh@936 -- # '[' -z 2625967 ']' 00:16:29.068 13:45:31 -- 
common/autotest_common.sh@940 -- # kill -0 2625967 00:16:29.068 13:45:31 -- common/autotest_common.sh@941 -- # uname 00:16:29.068 13:45:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.068 13:45:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2625967 00:16:29.068 13:45:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.068 13:45:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.068 13:45:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2625967' 00:16:29.068 killing process with pid 2625967 00:16:29.068 13:45:31 -- common/autotest_common.sh@955 -- # kill 2625967 00:16:29.068 13:45:31 -- common/autotest_common.sh@960 -- # wait 2625967 00:16:29.633 13:45:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:29.634 13:45:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:29.634 13:45:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:29.634 13:45:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.634 13:45:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.634 13:45:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.634 13:45:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.634 13:45:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.539 13:45:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.539 00:16:31.539 real 0m11.893s 00:16:31.539 user 0m34.960s 00:16:31.539 sys 0m3.231s 00:16:31.539 13:45:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.540 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.540 ************************************ 00:16:31.540 END TEST nvmf_shutdown_tc1 00:16:31.540 ************************************ 00:16:31.798 13:45:34 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:31.798 13:45:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:16:31.798 13:45:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.798 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.798 ************************************ 00:16:31.798 START TEST nvmf_shutdown_tc2 00:16:31.798 ************************************ 00:16:31.799 13:45:34 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:31.799 13:45:34 -- target/shutdown.sh@99 -- # starttarget 00:16:31.799 13:45:34 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:31.799 13:45:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:31.799 13:45:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.799 13:45:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:31.799 13:45:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:31.799 13:45:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:31.799 13:45:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.799 13:45:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.799 13:45:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.799 13:45:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:31.799 13:45:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.799 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.799 13:45:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:31.799 13:45:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.799 13:45:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.799 13:45:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.799 13:45:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.799 13:45:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.799 13:45:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.799 13:45:34 -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.799 13:45:34 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:16:31.799 13:45:34 -- nvmf/common.sh@296 -- # e810=() 00:16:31.799 13:45:34 -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.799 13:45:34 -- nvmf/common.sh@297 -- # x722=() 00:16:31.799 13:45:34 -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.799 13:45:34 -- nvmf/common.sh@298 -- # mlx=() 00:16:31.799 13:45:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.799 13:45:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.799 13:45:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.799 13:45:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.799 13:45:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.799 13:45:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.799 13:45:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:31.799 Found 0000:84:00.0 (0x8086 
- 0x159b) 00:16:31.799 13:45:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.799 13:45:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:31.799 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:31.799 13:45:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.799 13:45:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.799 13:45:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.799 13:45:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:31.799 13:45:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.799 13:45:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:31.799 Found net devices under 0000:84:00.0: cvl_0_0 00:16:31.799 13:45:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.799 13:45:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.799 13:45:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.799 13:45:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:31.799 13:45:34 -- 
nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.799 13:45:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:31.799 Found net devices under 0000:84:00.1: cvl_0_1 00:16:31.799 13:45:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.799 13:45:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:31.799 13:45:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:31.799 13:45:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:31.799 13:45:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:31.799 13:45:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.799 13:45:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.799 13:45:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.799 13:45:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.799 13:45:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.799 13:45:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.799 13:45:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.799 13:45:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.799 13:45:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.799 13:45:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.799 13:45:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.799 13:45:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.799 13:45:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.799 13:45:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.799 13:45:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.799 13:45:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.799 13:45:34 -- nvmf/common.sh@260 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.799 13:45:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.799 13:45:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.799 13:45:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:31.799 00:16:31.799 --- 10.0.0.2 ping statistics --- 00:16:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.799 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:32.057 13:45:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:32.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:16:32.057 00:16:32.057 --- 10.0.0.1 ping statistics --- 00:16:32.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.057 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:32.057 13:45:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.057 13:45:34 -- nvmf/common.sh@411 -- # return 0 00:16:32.057 13:45:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:32.057 13:45:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.057 13:45:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:32.057 13:45:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:32.057 13:45:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.057 13:45:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:32.057 13:45:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:32.057 13:45:34 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:32.057 13:45:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:32.058 13:45:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:32.058 13:45:34 -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.058 13:45:34 -- nvmf/common.sh@470 -- # nvmfpid=2627343 00:16:32.058 13:45:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:32.058 13:45:34 -- nvmf/common.sh@471 -- # waitforlisten 2627343 00:16:32.058 13:45:34 -- common/autotest_common.sh@817 -- # '[' -z 2627343 ']' 00:16:32.058 13:45:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.058 13:45:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.058 13:45:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.058 13:45:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.058 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:16:32.058 [2024-04-18 13:45:34.678379] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:32.058 [2024-04-18 13:45:34.678470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.058 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.058 [2024-04-18 13:45:34.752892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.318 [2024-04-18 13:45:34.872925] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.318 [2024-04-18 13:45:34.872983] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:32.318 [2024-04-18 13:45:34.873000] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.318 [2024-04-18 13:45:34.873013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.318 [2024-04-18 13:45:34.873026] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.318 [2024-04-18 13:45:34.873120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.318 [2024-04-18 13:45:34.873145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.318 [2024-04-18 13:45:34.873209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:32.318 [2024-04-18 13:45:34.873214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.318 13:45:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:32.318 13:45:35 -- common/autotest_common.sh@850 -- # return 0 00:16:32.318 13:45:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:32.318 13:45:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:32.318 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.318 13:45:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.318 13:45:35 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.318 13:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.318 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.318 [2024-04-18 13:45:35.035000] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.318 13:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.318 13:45:35 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:32.318 13:45:35 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:32.318 13:45:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:32.318 13:45:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.318 13:45:35 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:32.318 13:45:35 -- target/shutdown.sh@28 -- # cat 00:16:32.318 13:45:35 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:32.318 13:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.318 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.318 Malloc1 00:16:32.318 [2024-04-18 13:45:35.124872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.577 Malloc2 00:16:32.577 Malloc3 00:16:32.577 Malloc4 
00:16:32.577 Malloc5 00:16:32.577 Malloc6 00:16:32.836 Malloc7 00:16:32.836 Malloc8 00:16:32.836 Malloc9 00:16:32.836 Malloc10 00:16:32.836 13:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.836 13:45:35 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:32.836 13:45:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:32.836 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.836 13:45:35 -- target/shutdown.sh@103 -- # perfpid=2627408 00:16:32.836 13:45:35 -- target/shutdown.sh@104 -- # waitforlisten 2627408 /var/tmp/bdevperf.sock 00:16:32.836 13:45:35 -- common/autotest_common.sh@817 -- # '[' -z 2627408 ']' 00:16:32.836 13:45:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.836 13:45:35 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:32.836 13:45:35 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:32.836 13:45:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.836 13:45:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.836 13:45:35 -- nvmf/common.sh@521 -- # config=() 00:16:32.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:32.836 13:45:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.836 13:45:35 -- nvmf/common.sh@521 -- # local subsystem config 00:16:32.836 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.836 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.836 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.836 { 00:16:32.836 "params": { 00:16:32.836 "name": "Nvme$subsystem", 00:16:32.836 "trtype": "$TEST_TRANSPORT", 00:16:32.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.836 "adrfam": "ipv4", 00:16:32.836 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 
00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- 
# cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:32.837 { 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme$subsystem", 00:16:32.837 "trtype": "$TEST_TRANSPORT", 00:16:32.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "$NVMF_PORT", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.837 "hdgst": ${hdgst:-false}, 00:16:32.837 "ddgst": ${ddgst:-false} 00:16:32.837 }, 00:16:32.837 "method": "bdev_nvme_attach_controller" 00:16:32.837 } 00:16:32.837 EOF 00:16:32.837 )") 00:16:32.837 13:45:35 -- nvmf/common.sh@543 -- # cat 00:16:32.837 13:45:35 -- nvmf/common.sh@545 -- # jq . 
00:16:32.837 13:45:35 -- nvmf/common.sh@546 -- # IFS=, 00:16:32.837 13:45:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:32.837 "params": { 00:16:32.837 "name": "Nvme1", 00:16:32.837 "trtype": "tcp", 00:16:32.837 "traddr": "10.0.0.2", 00:16:32.837 "adrfam": "ipv4", 00:16:32.837 "trsvcid": "4420", 00:16:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme2", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme3", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme4", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme5", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 
00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme6", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme7", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme8", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme9", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 
00:16:32.838 },{ 00:16:32.838 "params": { 00:16:32.838 "name": "Nvme10", 00:16:32.838 "trtype": "tcp", 00:16:32.838 "traddr": "10.0.0.2", 00:16:32.838 "adrfam": "ipv4", 00:16:32.838 "trsvcid": "4420", 00:16:32.838 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:32.838 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:32.838 "hdgst": false, 00:16:32.838 "ddgst": false 00:16:32.838 }, 00:16:32.838 "method": "bdev_nvme_attach_controller" 00:16:32.838 }' 00:16:32.838 [2024-04-18 13:45:35.629503] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:32.838 [2024-04-18 13:45:35.629592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627408 ] 00:16:33.099 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.099 [2024-04-18 13:45:35.695524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.099 [2024-04-18 13:45:35.805411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.007 Running I/O for 10 seconds... 
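Once bdevperf starts its 10-second run, the script polls the Nvme1n1 read counter over the RPC socket until it crosses a threshold (the `read_io_count=3`, `67`, `131` progression visible in the following trace lines). A standalone sketch of that `waitforio` retry loop from target/shutdown.sh, with `rpc_cmd` stubbed to return those same canned counts; the real helper is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1` piped through `jq -r '.bdevs[0].num_read_ops'`:

```shell
# Canned counter values matching the trace; the real rpc_cmd queries bdevperf.
counts=(3 67 131)
attempt=0
rpc_cmd() { echo "${counts[$attempt]:-131}"; }

waitforio() {
    local ret=1 i read_io_count
    # Up to 10 polls, as in shutdown.sh@59 "(( i = 10 )); (( i != 0 ))".
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd)      # real script: rpc + jq extraction
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                     # enough I/O observed; test may proceed
            break
        fi
        attempt=$((attempt + 1))
        sleep 0.25                    # same back-off as shutdown.sh@67
    done
    return $ret
}

waitforio && echo "read I/O threshold reached after $attempt polls"
```

With the canned values the loop succeeds on the third poll, mirroring the 3 → 67 → 131 sequence in the log; if all 10 polls stayed below 100 the function would return 1 and the test would fail the `-ge 100` gate.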
00:16:35.007 13:45:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.007 13:45:37 -- common/autotest_common.sh@850 -- # return 0 00:16:35.007 13:45:37 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:35.007 13:45:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.007 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:16:35.007 13:45:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.007 13:45:37 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:35.007 13:45:37 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:35.007 13:45:37 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:35.007 13:45:37 -- target/shutdown.sh@57 -- # local ret=1 00:16:35.007 13:45:37 -- target/shutdown.sh@58 -- # local i 00:16:35.007 13:45:37 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:35.007 13:45:37 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:35.007 13:45:37 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:35.007 13:45:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.007 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:16:35.007 13:45:37 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:35.007 13:45:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.007 13:45:37 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:35.007 13:45:37 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:35.007 13:45:37 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:35.267 13:45:37 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:35.267 13:45:37 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:35.267 13:45:37 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:35.267 13:45:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.267 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:16:35.267 13:45:37 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:35.267 13:45:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.267 13:45:38 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:35.267 13:45:38 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:35.267 13:45:38 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:35.525 13:45:38 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:35.525 13:45:38 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:35.525 13:45:38 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:35.525 13:45:38 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:35.525 13:45:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.525 13:45:38 -- common/autotest_common.sh@10 -- # set +x 00:16:35.525 13:45:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.783 13:45:38 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:35.783 13:45:38 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:35.784 13:45:38 -- target/shutdown.sh@64 -- # ret=0 00:16:35.784 13:45:38 -- target/shutdown.sh@65 -- # break 00:16:35.784 13:45:38 -- target/shutdown.sh@69 -- # return 0 00:16:35.784 13:45:38 -- target/shutdown.sh@110 -- # killprocess 2627408 00:16:35.784 13:45:38 -- common/autotest_common.sh@936 -- # '[' -z 2627408 ']' 00:16:35.784 13:45:38 -- common/autotest_common.sh@940 -- # kill -0 2627408 00:16:35.784 13:45:38 -- common/autotest_common.sh@941 -- # uname 00:16:35.784 13:45:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.784 13:45:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2627408 00:16:35.784 13:45:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.784 13:45:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.784 13:45:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2627408' 00:16:35.784 killing process with pid 2627408 00:16:35.784 13:45:38 -- 
common/autotest_common.sh@955 -- # kill 2627408 00:16:35.784 13:45:38 -- common/autotest_common.sh@960 -- # wait 2627408 00:16:35.784 Received shutdown signal, test time was about 0.958617 seconds 00:16:35.784 00:16:35.784 Latency(us) 00:16:35.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme1n1 : 0.93 207.19 12.95 0.00 0.00 305261.23 20583.16 262532.36 00:16:35.784 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme2n1 : 0.93 205.72 12.86 0.00 0.00 301431.47 19709.35 267192.70 00:16:35.784 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme3n1 : 0.95 269.42 16.84 0.00 0.00 225631.95 19029.71 267192.70 00:16:35.784 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme4n1 : 0.90 212.98 13.31 0.00 0.00 278682.99 20874.43 259425.47 00:16:35.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme5n1 : 0.90 212.46 13.28 0.00 0.00 272894.23 19806.44 250104.79 00:16:35.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme6n1 : 0.96 268.04 16.75 0.00 0.00 213216.52 22136.60 233016.89 00:16:35.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme7n1 : 0.96 267.28 16.71 0.00 0.00 208838.16 20291.89 278066.82 00:16:35.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme8n1 : 0.92 212.98 13.31 0.00 0.00 254225.00 2803.48 257872.02 00:16:35.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme9n1 : 0.94 203.25 12.70 0.00 0.00 263181.65 20194.80 293601.28 00:16:35.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:35.784 Verification LBA range: start 0x0 length 0x400 00:16:35.784 Nvme10n1 : 0.94 203.90 12.74 0.00 0.00 256468.26 22427.88 273406.48 00:16:35.784 =================================================================================================================== 00:16:35.784 Total : 2263.22 141.45 0.00 0.00 254157.13 2803.48 293601.28 00:16:36.041 13:45:38 -- target/shutdown.sh@113 -- # sleep 1 00:16:36.974 13:45:39 -- target/shutdown.sh@114 -- # kill -0 2627343 00:16:36.974 13:45:39 -- target/shutdown.sh@116 -- # stoptarget 00:16:36.974 13:45:39 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:36.974 13:45:39 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:36.974 13:45:39 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:36.974 13:45:39 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:36.974 13:45:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:36.974 13:45:39 -- nvmf/common.sh@117 -- # sync 00:16:36.974 13:45:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.974 13:45:39 -- nvmf/common.sh@120 -- # set +e 00:16:36.974 13:45:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.974 13:45:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.974 rmmod nvme_tcp 00:16:36.974 rmmod nvme_fabrics 00:16:36.974 rmmod nvme_keyring 00:16:36.974 13:45:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.974 13:45:39 -- 
nvmf/common.sh@124 -- # set -e 00:16:36.974 13:45:39 -- nvmf/common.sh@125 -- # return 0 00:16:36.974 13:45:39 -- nvmf/common.sh@478 -- # '[' -n 2627343 ']' 00:16:36.974 13:45:39 -- nvmf/common.sh@479 -- # killprocess 2627343 00:16:36.974 13:45:39 -- common/autotest_common.sh@936 -- # '[' -z 2627343 ']' 00:16:36.974 13:45:39 -- common/autotest_common.sh@940 -- # kill -0 2627343 00:16:36.974 13:45:39 -- common/autotest_common.sh@941 -- # uname 00:16:37.232 13:45:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.232 13:45:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2627343 00:16:37.232 13:45:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.232 13:45:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.232 13:45:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2627343' 00:16:37.232 killing process with pid 2627343 00:16:37.232 13:45:39 -- common/autotest_common.sh@955 -- # kill 2627343 00:16:37.232 13:45:39 -- common/autotest_common.sh@960 -- # wait 2627343 00:16:37.802 13:45:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:37.802 13:45:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:37.802 13:45:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:37.802 13:45:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.802 13:45:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.802 13:45:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.802 13:45:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.802 13:45:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.708 13:45:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.708 00:16:39.708 real 0m7.957s 00:16:39.708 user 0m24.467s 00:16:39.708 sys 0m1.520s 00:16:39.708 13:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.708 13:45:42 -- common/autotest_common.sh@10 
-- # set +x 00:16:39.708 ************************************ 00:16:39.708 END TEST nvmf_shutdown_tc2 00:16:39.708 ************************************ 00:16:39.708 13:45:42 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:39.708 13:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:39.708 13:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.708 13:45:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.968 ************************************ 00:16:39.968 START TEST nvmf_shutdown_tc3 00:16:39.968 ************************************ 00:16:39.968 13:45:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:16:39.968 13:45:42 -- target/shutdown.sh@121 -- # starttarget 00:16:39.968 13:45:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:39.968 13:45:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:39.968 13:45:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.968 13:45:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:39.968 13:45:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:39.968 13:45:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:39.968 13:45:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.968 13:45:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.968 13:45:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.968 13:45:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:39.968 13:45:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.968 13:45:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.968 13:45:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.968 13:45:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.968 13:45:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.968 13:45:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.968 
13:45:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.968 13:45:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.968 13:45:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.968 13:45:42 -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.968 13:45:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.968 13:45:42 -- nvmf/common.sh@296 -- # e810=() 00:16:39.968 13:45:42 -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.968 13:45:42 -- nvmf/common.sh@297 -- # x722=() 00:16:39.968 13:45:42 -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.968 13:45:42 -- nvmf/common.sh@298 -- # mlx=() 00:16:39.968 13:45:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.968 13:45:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.968 13:45:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.968 13:45:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.968 13:45:42 
-- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.968 13:45:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.968 13:45:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:39.968 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:39.968 13:45:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.968 13:45:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:39.968 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:39.968 13:45:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.968 13:45:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.968 13:45:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.968 13:45:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:39.968 Found net devices under 0000:84:00.0: cvl_0_0 00:16:39.968 13:45:42 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.968 13:45:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.968 13:45:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.968 13:45:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.968 13:45:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:39.968 Found net devices under 0000:84:00.1: cvl_0_1 00:16:39.968 13:45:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.968 13:45:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:39.968 13:45:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:39.968 13:45:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:39.968 13:45:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.968 13:45:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.968 13:45:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.968 13:45:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.968 13:45:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.968 13:45:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.968 13:45:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.968 13:45:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.968 13:45:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.968 13:45:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.968 13:45:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.968 13:45:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.968 13:45:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:16:39.968 13:45:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.968 13:45:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.968 13:45:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.968 13:45:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.968 13:45:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.968 13:45:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.968 13:45:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:16:39.968 00:16:39.968 --- 10.0.0.2 ping statistics --- 00:16:39.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.968 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:16:39.968 13:45:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:16:39.968 00:16:39.968 --- 10.0.0.1 ping statistics --- 00:16:39.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.968 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:16:39.968 13:45:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.969 13:45:42 -- nvmf/common.sh@411 -- # return 0 00:16:39.969 13:45:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:39.969 13:45:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.969 13:45:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:39.969 13:45:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:39.969 13:45:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.969 13:45:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:39.969 13:45:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:39.969 13:45:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:39.969 13:45:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:39.969 13:45:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:39.969 13:45:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.969 13:45:42 -- nvmf/common.sh@470 -- # nvmfpid=2628447 00:16:39.969 13:45:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:39.969 13:45:42 -- nvmf/common.sh@471 -- # waitforlisten 2628447 00:16:39.969 13:45:42 -- common/autotest_common.sh@817 -- # '[' -z 2628447 ']' 00:16:39.969 13:45:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.969 13:45:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.969 13:45:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:39.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.969 13:45:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.969 13:45:42 -- common/autotest_common.sh@10 -- # set +x 00:16:39.969 [2024-04-18 13:45:42.745457] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:39.969 [2024-04-18 13:45:42.745538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.228 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.228 [2024-04-18 13:45:42.811840] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.228 [2024-04-18 13:45:42.921049] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.228 [2024-04-18 13:45:42.921107] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.228 [2024-04-18 13:45:42.921121] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.228 [2024-04-18 13:45:42.921133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.228 [2024-04-18 13:45:42.921151] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
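The tc3 prologue above (nvmf_tcp_init, lines @229-@268) splits the two ice NICs across network namespaces: cvl_0_0 moves into a private namespace as the NVMe-oF target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction verifies the link. A dry-run sketch of that sequence; these commands need root and the real interfaces, so `run` only prints them here (swap in `run() { "$@"; }` to execute for real):

```shell
# Names taken from the trace; interface names are specific to this CI host.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1

run() { echo "+ $*"; }   # dry-run wrapper: print instead of executing

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_INITIATOR_IP/24" dev "$initiator_if"
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# connectivity check in both directions, as in the trace:
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```

This is also why nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk` just below: the target process must live in the namespace that owns the target NIC, while bdevperf connects from the root namespace over 10.0.0.2.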
00:16:40.229 [2024-04-18 13:45:42.921235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.229 [2024-04-18 13:45:42.921297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.229 [2024-04-18 13:45:42.921365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.229 [2024-04-18 13:45:42.921368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.487 13:45:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.487 13:45:43 -- common/autotest_common.sh@850 -- # return 0 00:16:40.487 13:45:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:40.487 13:45:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.487 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.487 13:45:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.487 13:45:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.487 13:45:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.487 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.487 [2024-04-18 13:45:43.080169] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.487 13:45:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.487 13:45:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:40.487 13:45:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:40.488 13:45:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:40.488 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.488 13:45:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:40.488 13:45:43 -- target/shutdown.sh@28 -- # cat 00:16:40.488 13:45:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:40.488 13:45:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.488 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.488 Malloc1 00:16:40.488 [2024-04-18 13:45:43.169979] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.488 Malloc2 00:16:40.488 Malloc3 00:16:40.744 Malloc4 00:16:40.744 Malloc5 00:16:40.744 Malloc6 00:16:40.744 Malloc7 00:16:40.744 Malloc8 00:16:40.744 Malloc9 00:16:41.002 Malloc10 00:16:41.002 13:45:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.002 13:45:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:41.002 13:45:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:41.002 13:45:43 -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.002 13:45:43 -- target/shutdown.sh@125 -- # perfpid=2628598 00:16:41.002 13:45:43 -- target/shutdown.sh@126 -- # waitforlisten 2628598 /var/tmp/bdevperf.sock 00:16:41.002 13:45:43 -- common/autotest_common.sh@817 -- # '[' -z 2628598 ']' 00:16:41.002 13:45:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.002 13:45:43 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:41.002 13:45:43 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:41.002 13:45:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.002 13:45:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:41.002 13:45:43 -- nvmf/common.sh@521 -- # config=() 00:16:41.002 13:45:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.002 13:45:43 -- nvmf/common.sh@521 -- # local subsystem config 00:16:41.002 13:45:43 -- common/autotest_common.sh@10 -- # set +x 00:16:41.002 13:45:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:41.002 13:45:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:41.002 { 00:16:41.002 "params": { 00:16:41.002 "name": "Nvme$subsystem", 00:16:41.002 "trtype": "$TEST_TRANSPORT", 00:16:41.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.002 "adrfam": "ipv4", 00:16:41.002 "trsvcid": "$NVMF_PORT", 00:16:41.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.002 "hdgst": ${hdgst:-false}, 00:16:41.002 "ddgst": ${ddgst:-false} 00:16:41.002 }, 00:16:41.002 "method": "bdev_nvme_attach_controller" 00:16:41.002 } 00:16:41.002 EOF 00:16:41.002 )") 00:16:41.002 13:45:43 -- nvmf/common.sh@543 -- # cat 00:16:41.002 13:45:43 -- nvmf/common.sh@545 -- # jq . 
00:16:41.002 13:45:43 -- nvmf/common.sh@546 -- # IFS=, 00:16:41.002 13:45:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:41.002 "params": { 00:16:41.002 "name": "Nvme1", 00:16:41.002 "trtype": "tcp", 00:16:41.002 "traddr": "10.0.0.2", 00:16:41.002 "adrfam": "ipv4", 00:16:41.002 "trsvcid": "4420", 00:16:41.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.002 "hdgst": false, 00:16:41.002 "ddgst": false 00:16:41.002 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme2", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme3", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme4", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme5", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 
00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme6", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme7", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme8", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme9", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 
00:16:41.003 },{ 00:16:41.003 "params": { 00:16:41.003 "name": "Nvme10", 00:16:41.003 "trtype": "tcp", 00:16:41.003 "traddr": "10.0.0.2", 00:16:41.003 "adrfam": "ipv4", 00:16:41.003 "trsvcid": "4420", 00:16:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:41.003 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:41.003 "hdgst": false, 00:16:41.003 "ddgst": false 00:16:41.003 }, 00:16:41.003 "method": "bdev_nvme_attach_controller" 00:16:41.003 }' 00:16:41.003 [2024-04-18 13:45:43.680113] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:41.003 [2024-04-18 13:45:43.680233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628598 ] 00:16:41.003 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.003 [2024-04-18 13:45:43.744207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.261 [2024-04-18 13:45:43.853607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.638 Running I/O for 10 seconds... 
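The gen_nvmf_target_json trace above shows the pattern behind the printed config: one heredoc JSON fragment per subsystem is appended to a bash array, then the fragments are comma-joined (via `IFS=','`) and passed through jq. A self-contained sketch of that join for two subsystems follows; the fixed address and the bare-array wrapper are simplifications of the real helper, and jq is replaced by printf so the sketch has no external dependencies:

```shell
# Assemble one JSON fragment per subsystem, then comma-join them the way
# nvmf/common.sh does with IFS=','. Address and ports are illustrative.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}
EOF
)")
done
# "${config[*]}" joins array elements with the first character of IFS.
joined=$(IFS=','; printf '%s' "${config[*]}")
printf '[%s]\n' "$joined"
```

The comma-join is what turns the ten independently generated fragments into the single `},{`-separated document that bdevperf reads from `/dev/fd/63` above.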
00:16:42.894 13:45:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.894 13:45:45 -- common/autotest_common.sh@850 -- # return 0 00:16:42.894 13:45:45 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:42.894 13:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.894 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 13:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.894 13:45:45 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.895 13:45:45 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:42.895 13:45:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:42.895 13:45:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:42.895 13:45:45 -- target/shutdown.sh@57 -- # local ret=1 00:16:42.895 13:45:45 -- target/shutdown.sh@58 -- # local i 00:16:42.895 13:45:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:42.895 13:45:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:42.895 13:45:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:42.895 13:45:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.895 13:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.895 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:43.152 13:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.152 13:45:45 -- target/shutdown.sh@60 -- # read_io_count=16 00:16:43.152 13:45:45 -- target/shutdown.sh@63 -- # '[' 16 -ge 100 ']' 00:16:43.152 13:45:45 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:43.433 13:45:45 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:43.433 13:45:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:43.433 13:45:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:43.433 
13:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.433 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:16:43.433 13:45:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:43.433 13:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.433 13:45:46 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:43.433 13:45:46 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:43.433 13:45:46 -- target/shutdown.sh@64 -- # ret=0 00:16:43.433 13:45:46 -- target/shutdown.sh@65 -- # break 00:16:43.433 13:45:46 -- target/shutdown.sh@69 -- # return 0 00:16:43.433 13:45:46 -- target/shutdown.sh@135 -- # killprocess 2628447 00:16:43.433 13:45:46 -- common/autotest_common.sh@936 -- # '[' -z 2628447 ']' 00:16:43.433 13:45:46 -- common/autotest_common.sh@940 -- # kill -0 2628447 00:16:43.433 13:45:46 -- common/autotest_common.sh@941 -- # uname 00:16:43.433 13:45:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.433 13:45:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2628447 00:16:43.433 13:45:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:43.433 13:45:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:43.433 13:45:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2628447' 00:16:43.433 killing process with pid 2628447 00:16:43.433 13:45:46 -- common/autotest_common.sh@955 -- # kill 2628447 00:16:43.433 13:45:46 -- common/autotest_common.sh@960 -- # wait 2628447 00:16:43.433 [2024-04-18 13:45:46.072103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a490 is same with the state(5) to be set 00:16:43.434 [2024-04-18 13:45:46.074394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cdc0 is same with the state(5) to be set 
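The waitforio trace above polls bdev_get_iostat until Nvme1n1 has seen at least 100 reads, sleeping 0.25 s between samples with a 10-try budget (16 reads on the first sample, 131 on the second). A dependency-free sketch of that loop follows; the rpc/jq query is replaced by a stub counter, so the numbers are illustrative:

```shell
# Poll until the read counter crosses the threshold or retries run out,
# mirroring target/shutdown.sh's waitforio. The stubbed increment stands
# in for: rpc_cmd -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'
ret=1
i=10
count=0
while [ "$i" -ne 0 ]; do
  count=$((count + 40))          # stubbed iostat sample
  if [ "$count" -ge 100 ]; then
    ret=0                        # enough I/O observed; workload is live
    break
  fi
  sleep 0.25
  i=$((i - 1))
done
echo "ret=$ret count=$count"     # prints: ret=0 count=120
```

Returning 0 here is what lets the test proceed to killprocess while bdevperf is still actively issuing I/O, which is the shutdown scenario under test.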
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cdc0 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.075271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cdc0 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.075285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cdc0 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076763] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076907] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.076994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077062] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077246] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077397] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.435 [2024-04-18 13:45:46.077455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.436 [2024-04-18 13:45:46.077468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a920 is same with the state(5) to be set 00:16:43.436 [2024-04-18 13:45:46.077811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.436 [2024-04-18 13:45:46.077851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.436 [2024-04-18 13:45:46.077869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.436 [2024-04-18 13:45:46.077883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.436 [2024-04-18 13:45:46.077903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.436 [2024-04-18 13:45:46.077916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.436 
[2024-04-18 13:45:46.077929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.436 [2024-04-18 13:45:46.077942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.436 [2024-04-18 13:45:46.077955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a190 is same with the state(5) to be set
00:16:43.436 [2024-04-18 13:45:46.078044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.436 [2024-04-18 13:45:46.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.436 [2024-04-18 13:45:46.078079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.436 [2024-04-18 13:45:46.078092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.436 [2024-04-18 13:45:46.078105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.436 [2024-04-18 13:45:46.078118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.436 [2024-04-18 13:45:46.078132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.436 [2024-04-18 13:45:46.078145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.436 [2024-04-18 13:45:46.078159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f47b0 is same with the state(5) to be set
00:16:43.436 [2024-04-18 13:45:46.079113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125adb0 is same with the state(5) to be set
00:16:43.437 [2024-04-18 13:45:46.081345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set
00:16:43.437 [2024-04-18 13:45:46.082122] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.082134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.082146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.082158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.082170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.082205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b240 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b6d0 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083798] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083952] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.437 [2024-04-18 13:45:46.083964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.083977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.083989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084102] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084289] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.084351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb80 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085703] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085852] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.085993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086006] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.438 [2024-04-18 13:45:46.086094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086155] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086347] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.086446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c010 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087566] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087750] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087906] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.087999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.088013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.088025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.088037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.088049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 [2024-04-18 13:45:46.088062] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c4a0 is same with the state(5) to be set 00:16:43.439 
(message above repeated for tqpair=0x125c4a0, timestamps 2024-04-18 13:45:46.088078 - 13:45:46.088387) 00:16:43.440 
[2024-04-18 13:45:46.089143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125c930 is same with the state(5) to be set 00:16:43.440 
(message above repeated for tqpair=0x125c930, timestamps 13:45:46.089172 - 13:45:46.090026) 00:16:43.440 
[2024-04-18 13:45:46.103777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.440 
[2024-04-18 13:45:46.103873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.441 
(WRITE/ABORTED pair above repeated for cid:1 through cid:63, lba 16512 through 24448 in steps of 128; every completion reports qid:1 cid:0; timestamps 13:45:46.103910 - 13:45:46.105909) 00:16:43.442 
[2024-04-18 13:45:46.105974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:16:43.442 
[2024-04-18 13:45:46.106067] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1622c50 was disconnected and freed. reset controller. 00:16:43.442 
[2024-04-18 13:45:46.106495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a190 (9): Bad file descriptor 00:16:43.442 
[2024-04-18 13:45:46.106566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 
[2024-04-18 13:45:46.106587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 
(ASYNC EVENT REQUEST/ABORTED pair above repeated for cid:1 through cid:3; timestamps 13:45:46.106603 - 13:45:46.106673) 00:16:43.442 
[2024-04-18 13:45:46.106686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162d820 is same with the state(5) to be set 00:16:43.442 
[2024-04-18 13:45:46.106736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 
[2024-04-18 13:45:46.106757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.106773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.106787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.106801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.106815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.106835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.106849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.106862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164e5d0 is same with the state(5) to be set 00:16:43.442 [2024-04-18 13:45:46.106921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.106942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.106958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.106972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 
13:45:46.106987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.107016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.107029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.107042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc2b0 is same with the state(5) to be set 00:16:43.442 [2024-04-18 13:45:46.107089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.107109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.107124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.107138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.442 [2024-04-18 13:45:46.107152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.442 [2024-04-18 13:45:46.107165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e30c0 is same with the state(5) to be set 00:16:43.443 [2024-04-18 13:45:46.107249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f47b0 (9): Bad file descriptor 00:16:43.443 [2024-04-18 13:45:46.107299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107423] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638000 is same with the state(5) to be set 00:16:43.443 [2024-04-18 13:45:46.107478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3870 is same with the state(5) to be set 00:16:43.443 [2024-04-18 13:45:46.107652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d8390 is same with the state(5) to be set 00:16:43.443 [2024-04-18 13:45:46.107825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107899] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.443 [2024-04-18 13:45:46.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.107955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c85f0 is same with the state(5) to be set 00:16:43.443 [2024-04-18 13:45:46.108373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:43.443 [2024-04-18 13:45:46.108676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.108978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.443 [2024-04-18 13:45:46.108995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.443 [2024-04-18 13:45:46.109009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 
[2024-04-18 13:45:46.109410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.109972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.109986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 
[2024-04-18 13:45:46.110126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.444 [2024-04-18 13:45:46.110282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.444 [2024-04-18 13:45:46.110302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.110317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.110333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.110348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.110364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.110378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.110394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.110408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.110425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.110439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.110454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bed30 is same with the state(5) to be set 00:16:43.445 [2024-04-18 13:45:46.110545] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17bed30 was disconnected and freed. reset controller. 
00:16:43.445 [2024-04-18 13:45:46.111999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:16:43.445 [2024-04-18 13:45:46.112036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3870 (9): Bad file descriptor 00:16:43.445 [2024-04-18 13:45:46.113700] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.113742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:16:43.445 [2024-04-18 13:45:46.113769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc2b0 (9): Bad file descriptor 00:16:43.445 [2024-04-18 13:45:46.113855] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.113925] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.114819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.445 [2024-04-18 13:45:46.114973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.445 [2024-04-18 13:45:46.114999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3870 with addr=10.0.0.2, port=4420 00:16:43.445 [2024-04-18 13:45:46.115017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3870 is same with the state(5) to be set 00:16:43.445 [2024-04-18 13:45:46.115124] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.115517] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.115601] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 [2024-04-18 13:45:46.115678] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:16:43.445 
[2024-04-18 13:45:46.115828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.445 [2024-04-18 13:45:46.116011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.445 [2024-04-18 13:45:46.116036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dc2b0 with addr=10.0.0.2, port=4420 00:16:43.445 [2024-04-18 13:45:46.116060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc2b0 is same with the state(5) to be set 00:16:43.445 [2024-04-18 13:45:46.116079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3870 (9): Bad file descriptor 00:16:43.445 [2024-04-18 13:45:46.116167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:43.445 [2024-04-18 13:45:46.116694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.116977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.116996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.445 [2024-04-18 13:45:46.117013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.445 [2024-04-18 13:45:46.117028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 
13:45:46.117433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117603] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 
[2024-04-18 13:45:46.117959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.117975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.117990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.446 [2024-04-18 13:45:46.118255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.446 [2024-04-18 13:45:46.118270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bd880 is same with the state(5) to be set 00:16:43.446 [2024-04-18 13:45:46.118357] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17bd880 was disconnected and freed. reset controller. 
00:16:43.446 [2024-04-18 13:45:46.118466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc2b0 (9): Bad file descriptor 00:16:43.446 [2024-04-18 13:45:46.118493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:16:43.447 [2024-04-18 13:45:46.118514] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:16:43.447 [2024-04-18 13:45:46.118531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:16:43.447 [2024-04-18 13:45:46.118598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162d820 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.118633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164e5d0 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.118661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e30c0 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.118698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638000 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.118729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d8390 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.118759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c85f0 (9): Bad file descriptor 00:16:43.447 [2024-04-18 13:45:46.119988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:43.447 [2024-04-18 13:45:46.120025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:16:43.447 [2024-04-18 13:45:46.120057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:43.447 [2024-04-18 13:45:46.120074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:43.447 [2024-04-18 13:45:46.120087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:43.447 [2024-04-18 13:45:46.120156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:43.447 [2024-04-18 13:45:46.120183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) entry pairs repeat for cid:1 through cid:63 (lba 16512 through 24448, step 128), timestamps 13:45:46.120205 through 13:45:46.122154 ...]
00:16:43.448 [2024-04-18 13:45:46.122169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171fc60 is same with the state(5) to be set
00:16:43.448 [2024-04-18 13:45:46.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:43.448 [2024-04-18 13:45:46.123502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) entry pairs repeat for cid:1 through cid:46 (lba 8320 through 14080, step 128), timestamps 13:45:46.123524 through 13:45:46.124971; final entry truncated at chunk boundary ...]
p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.124986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.125001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.125018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.125032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 
13:45:46.134400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134571] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.134738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.134758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717dc0 is same with the state(5) to be set 00:16:43.450 [2024-04-18 13:45:46.137084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:43.450 [2024-04-18 13:45:46.137117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:43.450 [2024-04-18 13:45:46.137146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:16:43.450 [2024-04-18 13:45:46.137475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.137655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.137681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164e5d0 with addr=10.0.0.2, port=4420 00:16:43.450 [2024-04-18 13:45:46.137699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164e5d0 is same with the state(5) to be set 00:16:43.450 [2024-04-18 13:45:46.137811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164e5d0 (9): Bad file descriptor 00:16:43.450 [2024-04-18 13:45:46.138419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.138601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.138626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f47b0 with addr=10.0.0.2, port=4420 00:16:43.450 [2024-04-18 13:45:46.138642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f47b0 is same with the state(5) to be set 00:16:43.450 [2024-04-18 13:45:46.138796] posix.c:1037:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.139001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.450 [2024-04-18 13:45:46.139028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a190 with addr=10.0.0.2, port=4420 00:16:43.450 [2024-04-18 13:45:46.139045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a190 is same with the state(5) to be set 00:16:43.450 [2024-04-18 13:45:46.139403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.450 [2024-04-18 13:45:46.139637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.450 [2024-04-18 13:45:46.139654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:43.451 [2024-04-18 13:45:46.139746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.139972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.139989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 
13:45:46.140452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.451 [2024-04-18 13:45:46.140632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.451 [2024-04-18 13:45:46.140648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.140940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 
[2024-04-18 13:45:46.140970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.140986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.141000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.141016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.141030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.141046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.141060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.141076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.141090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.141106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.452 [2024-04-18 13:45:46.141120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.452 [2024-04-18 13:45:46.141136] nvme_qpair.c: 
[2024-04-18 13:45:46.141150 – 13:45:46.141421] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:52–60 nsid:1 lba:23040–24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-18 13:45:46.141421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720e80 is same with the state(5) to be set
[2024-04-18 13:45:46.142687 – 13:45:46.144687] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:0–63 nsid:1 lba:16384–24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-18 13:45:46.144702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17222f0 is same with the state(5) to be set
[2024-04-18 13:45:46.145969 – 13:45:46.147188] nvme_qpair.c: repeated *NOTICE* pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:0–38 nsid:1 lba:16384–21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (entry truncated)
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 
[2024-04-18 13:45:46.147543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.147944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.147959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1624100 is same with the state(5) to be set 00:16:43.455 [2024-04-18 13:45:46.149208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.149236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.149259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.149274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.455 [2024-04-18 13:45:46.149291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.455 [2024-04-18 13:45:46.149306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:43.455 [2024-04-18 13:45:46.149322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:43.456 [2024-04-18 13:45:46.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.149999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:43.456 [2024-04-18 13:45:46.150375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.456 [2024-04-18 13:45:46.150389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 
13:45:46.150548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150717] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.150978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.150995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 
[2024-04-18 13:45:46.151068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.151202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.151216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16255b0 is same with the state(5) to be set 00:16:43.457 [2024-04-18 13:45:46.152453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 [2024-04-18 13:45:46.152811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.457 
[2024-04-18 13:45:46.152843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.457 [2024-04-18 13:45:46.152858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.152875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.152889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.152905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.152919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.152936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.152950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.152967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.152981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.152997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:43.458 [2024-04-18 13:45:46.153404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153568] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.153972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.153987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.154005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.154022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.154036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.154052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 13:45:46.154065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.154081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.458 [2024-04-18 
13:45:46.154094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.458 [2024-04-18 13:45:46.154110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:43.459 [2024-04-18 13:45:46.154475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.459 [2024-04-18 13:45:46.154490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626810 is same with the state(5) to be set 00:16:43.459 [2024-04-18 13:45:46.156384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:16:43.459 [2024-04-18 13:45:46.156419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:16:43.459 [2024-04-18 13:45:46.156438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:16:43.459 [2024-04-18 13:45:46.156456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:43.459 [2024-04-18 13:45:46.156472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:16:43.459 [2024-04-18 13:45:46.156550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f47b0 (9): Bad file descriptor 00:16:43.459 [2024-04-18 13:45:46.156576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a190 (9): Bad file descriptor 00:16:43.459 [2024-04-18 13:45:46.156594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:16:43.459 [2024-04-18 13:45:46.156607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:16:43.459 [2024-04-18 13:45:46.156626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4] in failed state. 00:16:43.459 [2024-04-18 13:45:46.156700] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.459 [2024-04-18 13:45:46.156728] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.459 [2024-04-18 13:45:46.156750] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.459 [2024-04-18 13:45:46.156768] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.459 [2024-04-18 13:45:46.156787] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.459 [2024-04-18 13:45:46.156901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:16:43.459 task offset: 16384 on job bdev=Nvme6n1 fails
00:16:43.459
00:16:43.459 Latency(us)
00:16:43.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:43.459 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme1n1 ended in about 0.76 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme1n1 : 0.76 168.28 10.52 84.14 0.00 250210.73 21359.88 236123.78
00:16:43.459 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme2n1 ended in about 0.78 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme2n1 : 0.78 164.13 10.26 82.07 0.00 250500.49 18155.90 257872.02
00:16:43.459 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme3n1 ended in about 0.78 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme3n1 : 0.78 163.45 10.22 81.73 0.00 245462.66 38836.15 236123.78
00:16:43.459 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme4n1 ended in about 0.76 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme4n1 : 0.76 169.02 10.56 84.51 0.00 230751.19 19515.16 265639.25
00:16:43.459 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme5n1 ended in about 0.75 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme5n1 : 0.75 170.49 10.66 85.25 0.00 222525.95 9320.68 260978.92
00:16:43.459 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme6n1 ended in about 0.75 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme6n1 : 0.75 170.89 10.68 85.45 0.00 215743.40 27185.30 250104.79
00:16:43.459 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme7n1 ended in about 0.79 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme7n1 : 0.79 162.77 10.17 81.39 0.00 222119.25 16505.36 262532.36
00:16:43.459 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme8n1 ended in about 0.79 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme8n1 : 0.79 162.11 10.13 81.05 0.00 217304.18 19806.44 231463.44
00:16:43.459 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme9n1 ended in about 0.79 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme9n1 : 0.79 80.72 5.04 80.72 0.00 318853.88 22330.79 301368.51
00:16:43.459 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.459 Job: Nvme10n1 ended in about 0.77 seconds with error
00:16:43.459 Verification LBA range: start 0x0 length 0x400
00:16:43.459 Nvme10n1 : 0.77 82.77 5.17 82.77 0.00 300195.84 33204.91 278066.82
00:16:43.459 ===================================================================================================================
00:16:43.459 Total : 1494.64 93.41 829.06 0.00 242926.89 9320.68 301368.51
00:16:43.459 [2024-04-18 13:45:46.183712] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:43.459 [2024-04-18 13:45:46.183803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:16:43.459 [2024-04-18 13:45:46.183839] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:43.459 [2024-04-18 13:45:46.184151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.184350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.184379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e3870 with addr=10.0.0.2, port=4420 00:16:43.459 [2024-04-18 13:45:46.184400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3870 is same with the state(5) to be set 00:16:43.459 [2024-04-18 13:45:46.184578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.184772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.184798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dc2b0 with addr=10.0.0.2, port=4420 00:16:43.459 [2024-04-18 13:45:46.184814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc2b0 is same with the state(5) to be set 00:16:43.459 [2024-04-18 13:45:46.184957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.185168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:16:43.459 [2024-04-18 13:45:46.185201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1638000 with addr=10.0.0.2, port=4420 00:16:43.459 [2024-04-18 13:45:46.185218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638000 is same with the state(5) to be set 00:16:43.459 [2024-04-18 13:45:46.185407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.185652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.459 [2024-04-18 13:45:46.185678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162d820 with addr=10.0.0.2, port=4420 00:16:43.459 [2024-04-18 13:45:46.185693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162d820 is same with the state(5) to be set 00:16:43.460 [2024-04-18 13:45:46.185892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.186069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.186094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d8390 with addr=10.0.0.2, port=4420 00:16:43.460 [2024-04-18 13:45:46.186110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d8390 is same with the state(5) to be set 00:16:43.460 [2024-04-18 13:45:46.186127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:43.460 [2024-04-18 13:45:46.186141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:43.460 [2024-04-18 13:45:46.186158] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:43.460 [2024-04-18 13:45:46.186193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:16:43.460 [2024-04-18 13:45:46.186210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:16:43.460 [2024-04-18 13:45:46.186224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:16:43.460 [2024-04-18 13:45:46.187690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:43.460 [2024-04-18 13:45:46.187715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:43.460 [2024-04-18 13:45:46.187998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.188189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.188216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e30c0 with addr=10.0.0.2, port=4420 00:16:43.460 [2024-04-18 13:45:46.188233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e30c0 is same with the state(5) to be set 00:16:43.460 [2024-04-18 13:45:46.188396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.188612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.460 [2024-04-18 13:45:46.188637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c85f0 with addr=10.0.0.2, port=4420 00:16:43.460 [2024-04-18 13:45:46.188652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c85f0 is same with the state(5) to be set 00:16:43.460 [2024-04-18 13:45:46.188680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3870 (9): Bad file descriptor 
00:16:43.460 [2024-04-18 13:45:46.188705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc2b0 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.188723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638000 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.188742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162d820 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.188777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d8390 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.188852] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:43.460 [2024-04-18 13:45:46.188877] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:43.460 [2024-04-18 13:45:46.188897] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:43.460 [2024-04-18 13:45:46.188916] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:43.460 [2024-04-18 13:45:46.188935] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:43.460 [2024-04-18 13:45:46.189042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e30c0 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.189069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c85f0 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.189086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189300] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:16:43.460 [2024-04-18 13:45:46.189437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:16:43.460 [2024-04-18 13:45:46.189454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:43.460 [2024-04-18 13:45:46.189468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189482] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189493] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.189604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.189618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:16:43.460 [2024-04-18 13:45:46.189645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189686] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.460 [2024-04-18 13:45:46.189934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.190126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.190150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164e5d0 with addr=10.0.0.2, port=4420
00:16:43.460 [2024-04-18 13:45:46.190166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164e5d0 is same with the state(5) to be set
00:16:43.460 [2024-04-18 13:45:46.190358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.190476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.190500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175a190 with addr=10.0.0.2, port=4420
00:16:43.460 [2024-04-18 13:45:46.190516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a190 is same with the state(5) to be set
00:16:43.460 [2024-04-18 13:45:46.190722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.191011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:43.460 [2024-04-18 13:45:46.191035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f47b0 with addr=10.0.0.2, port=4420
00:16:43.460 [2024-04-18 13:45:46.191051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f47b0 is same with the state(5) to be set
00:16:43.460 [2024-04-18 13:45:46.191096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164e5d0 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.191121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a190 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.191140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f47b0 (9): Bad file descriptor
00:16:43.460 [2024-04-18 13:45:46.191188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:16:43.460 [2024-04-18 13:45:46.191208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:16:43.460 [2024-04-18 13:45:46.191222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:16:43.461 [2024-04-18 13:45:46.191238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:16:43.461 [2024-04-18 13:45:46.191252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:16:43.461 [2024-04-18 13:45:46.191264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:16:43.461 [2024-04-18 13:45:46.191279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:43.461 [2024-04-18 13:45:46.191293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:43.461 [2024-04-18 13:45:46.191310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:43.461 [2024-04-18 13:45:46.191346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.461 [2024-04-18 13:45:46.191363] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:43.461 [2024-04-18 13:45:46.191375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
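Two errno values dominate the trace above: `connect() failed, errno = 111` from posix_sock_create, and the `(9)` in the "Failed to flush tqpair" messages. On Linux these are ECONNREFUSED (the shutdown test has already torn the target down, so every reconnect attempt to 10.0.0.2:4420 is refused) and EBADF (the qpair's socket fd was closed before the flush ran). A minimal sketch confirming the two constants; this is editor-added illustration, not part of the test suite, and it does not call any SPDK code:

```python
import errno
import os

# errno = 111 in the posix_sock_create errors: connection refused,
# i.e. nothing is listening on the target address/port anymore.
print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))

# (9) in the "Failed to flush tqpair" errors: bad file descriptor,
# the socket behind the qpair was already closed.
print(errno.EBADF, os.strerror(errno.EBADF))
```

On Linux this prints `111 Connection refused` and `9 Bad file descriptor`, matching the log.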
00:16:44.041 13:45:46 -- target/shutdown.sh@136 -- # nvmfpid=
00:16:44.041 13:45:46 -- target/shutdown.sh@139 -- # sleep 1
00:16:44.977 13:45:47 -- target/shutdown.sh@142 -- # kill -9 2628598
00:16:44.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2628598) - No such process
00:16:44.977 13:45:47 -- target/shutdown.sh@142 -- # true
00:16:44.977 13:45:47 -- target/shutdown.sh@144 -- # stoptarget
00:16:44.977 13:45:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:16:44.977 13:45:47 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:44.977 13:45:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:44.977 13:45:47 -- target/shutdown.sh@45 -- # nvmftestfini
00:16:44.977 13:45:47 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:44.977 13:45:47 -- nvmf/common.sh@117 -- # sync
00:16:44.977 13:45:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:44.977 13:45:47 -- nvmf/common.sh@120 -- # set +e
00:16:44.977 13:45:47 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:44.977 13:45:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:45:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
13:45:47 -- nvmf/common.sh@124 -- # set -e
13:45:47 -- nvmf/common.sh@125 -- # return 0
13:45:47 -- nvmf/common.sh@478 -- # '[' -n '' ']'
13:45:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
13:45:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
13:45:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
13:45:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
13:45:47 -- nvmf/common.sh@278 -- # remove_spdk_ns
13:45:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:45:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
13:45:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:47.513 13:45:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:47.513
00:16:47.513 real 0m7.260s
00:16:47.513 user 0m17.173s
00:16:47.513 sys 0m1.373s
00:16:47.513 13:45:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:47.513 13:45:49 -- common/autotest_common.sh@10 -- # set +x
00:16:47.513 ************************************
00:16:47.513 END TEST nvmf_shutdown_tc3
00:16:47.513 ************************************
00:16:47.513 13:45:49 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:16:47.513
00:16:47.513 real 0m27.521s
00:16:47.513 user 1m16.750s
00:16:47.513 sys 0m6.365s
00:16:47.513 13:45:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:47.514 13:45:49 -- common/autotest_common.sh@10 -- # set +x
00:16:47.514 ************************************
00:16:47.514 END TEST nvmf_shutdown
00:16:47.514 ************************************
00:16:47.514 13:45:49 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:16:47.514 13:45:49 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:47.514 13:45:49 -- common/autotest_common.sh@10 -- # set +x
00:16:47.514 13:45:49 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:16:47.514 13:45:49 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:47.514 13:45:49 -- common/autotest_common.sh@10 -- # set +x
00:16:47.514 13:45:49 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:16:47.514 13:45:49 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:16:47.514 13:45:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:47.514 13:45:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:47.514 13:45:49 -- common/autotest_common.sh@10 -- # set +x
00:16:47.514 ************************************
00:16:47.514 START TEST nvmf_multicontroller
00:16:47.514 ************************************
00:16:47.514 13:45:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:16:47.514 * Looking for test storage...
00:16:47.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:16:47.514 13:45:50 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:47.514 13:45:50 -- nvmf/common.sh@7 -- # uname -s
00:16:47.514 13:45:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:47.514 13:45:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:47.514 13:45:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:47.514 13:45:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:47.514 13:45:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:47.514 13:45:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:47.514 13:45:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:47.514 13:45:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:47.514 13:45:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:47.514 13:45:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:47.514 13:45:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:47.514 13:45:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:16:47.514 13:45:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:47.514 13:45:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:47.514 13:45:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:47.514 13:45:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:47.514 13:45:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:47.514 13:45:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:47.514 13:45:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:47.514 13:45:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:47.514 13:45:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:47.514 13:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:47.514 13:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:47.514 13:45:50 -- paths/export.sh@5 -- # export PATH
00:16:47.514 13:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:47.514 13:45:50 -- nvmf/common.sh@47 -- # : 0
00:16:47.514 13:45:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:47.514 13:45:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:47.514 13:45:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:47.514 13:45:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:47.514 13:45:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:47.514 13:45:50 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:47.514 13:45:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:47.514 13:45:50 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:47.514 13:45:50 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:47.514 13:45:50 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:47.514 13:45:50 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:16:47.514 13:45:50 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:16:47.514 13:45:50 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:47.514 13:45:50 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:16:47.514 13:45:50 -- host/multicontroller.sh@23 -- # nvmftestinit
00:16:47.514 13:45:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:47.514 13:45:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:47.514 13:45:50 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:47.514 13:45:50 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:47.514 13:45:50 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:47.514 13:45:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:47.514 13:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:47.514 13:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:47.514 13:45:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:47.514 13:45:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:47.514 13:45:50 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:47.514 13:45:50 -- common/autotest_common.sh@10 -- # set +x
00:16:49.415 13:45:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:49.415 13:45:52 -- nvmf/common.sh@291 -- # pci_devs=()
00:16:49.415 13:45:52 -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:49.415 13:45:52 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:49.415 13:45:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:49.415 13:45:52 -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:49.415 13:45:52 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:49.415 13:45:52 -- nvmf/common.sh@295 -- # net_devs=()
00:16:49.415 13:45:52 -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:49.415 13:45:52 -- nvmf/common.sh@296 -- # e810=()
00:16:49.415 13:45:52 -- nvmf/common.sh@296 -- # local -ga e810
00:16:49.415 13:45:52 -- nvmf/common.sh@297 -- # x722=()
00:16:49.415 13:45:52 -- nvmf/common.sh@297 -- # local -ga x722
00:16:49.415 13:45:52 -- nvmf/common.sh@298 -- # mlx=()
00:16:49.415 13:45:52 -- nvmf/common.sh@298 -- # local -ga mlx
00:16:49.415 13:45:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:49.415 13:45:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:49.415 13:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
00:16:49.415 13:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:49.415 13:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
00:16:49.415 13:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:49.415 13:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:49.415 13:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:49.415 13:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
00:16:49.415 13:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:49.415 13:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:49.415 13:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:49.415 13:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
00:16:49.415 13:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@403 -- # is_hw=yes
00:16:49.415 13:45:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:16:49.415 13:45:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:49.415 13:45:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:49.415 13:45:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:49.415 13:45:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:49.415 13:45:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:49.415 13:45:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:49.415 13:45:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:49.415 13:45:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:49.415 13:45:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:49.415 13:45:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:49.415 13:45:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:49.415 13:45:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:49.415 13:45:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:49.415 13:45:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:49.415 13:45:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:49.415 13:45:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:49.415 13:45:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:49.415 13:45:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:49.415 13:45:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms
00:16:49.415
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:16:49.415 13:45:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:16:49.415
--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:16:49.415 13:45:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:49.415 13:45:52 -- nvmf/common.sh@411 -- # return 0
00:16:49.415 13:45:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:49.415 13:45:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:49.415 13:45:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:49.415 13:45:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:49.415 13:45:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:49.416 13:45:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:49.416 13:45:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:16:49.416 13:45:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:49.416 13:45:52 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:49.416 13:45:52 -- common/autotest_common.sh@10 -- # set +x
00:16:49.416 13:45:52 -- nvmf/common.sh@470 -- # nvmfpid=2631056
13:45:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:16:49.416 13:45:52 -- nvmf/common.sh@471 -- # waitforlisten 2631056
00:16:49.416 13:45:52 -- common/autotest_common.sh@817 -- # '[' -z 2631056 ']'
00:16:49.416 13:45:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:49.416 13:45:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:49.416 13:45:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:49.416 13:45:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:49.416 13:45:52 -- common/autotest_common.sh@10 -- # set +x
00:16:49.675 [2024-04-18 13:45:52.260152] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:16:49.675 [2024-04-18 13:45:52.260262] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:49.675 EAL: No free 2048 kB hugepages reported on node 1
00:16:49.675 [2024-04-18 13:45:52.329258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:49.675 [2024-04-18 13:45:52.444120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:49.675 [2024-04-18 13:45:52.444197] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:49.675 [2024-04-18 13:45:52.444223] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:49.675 [2024-04-18 13:45:52.444249] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:49.675 [2024-04-18 13:45:52.444341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:49.675 [2024-04-18 13:45:52.444471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:49.675 [2024-04-18 13:45:52.444474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:50.608 13:45:53 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:50.608 13:45:53 -- common/autotest_common.sh@850 -- # return 0
00:16:50.608 13:45:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:50.608 13:45:53 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 13:45:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:50.608 13:45:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 [2024-04-18 13:45:53.246440] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 Malloc0
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 [2024-04-18 13:45:53.307391] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 [2024-04-18 13:45:53.315248] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x
00:16:50.608 Malloc1
00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:50.608 13:45:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- #
set +x 00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.608 13:45:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.608 13:45:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.608 13:45:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:50.608 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.608 13:45:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=2631210 00:16:50.608 13:45:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.608 13:45:53 -- host/multicontroller.sh@47 -- # waitforlisten 2631210 /var/tmp/bdevperf.sock 00:16:50.608 13:45:53 -- common/autotest_common.sh@817 -- # '[' -z 2631210 ']' 00:16:50.608 13:45:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.608 13:45:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.608 13:45:53 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:50.608 13:45:53 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.608 13:45:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.608 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.176 13:45:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.176 13:45:53 -- common/autotest_common.sh@850 -- # return 0 00:16:51.176 13:45:53 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:51.176 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.176 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.176 NVMe0n1 00:16:51.176 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.176 13:45:53 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:51.176 13:45:53 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:51.176 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.176 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.176 13:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.176 1 00:16:51.176 13:45:53 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:51.176 13:45:53 -- common/autotest_common.sh@638 -- # local es=0 00:16:51.176 13:45:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:51.176 13:45:53 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:51.176 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.176 13:45:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:51.176 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.176 13:45:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:51.176 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.176 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.176 request: 00:16:51.176 { 00:16:51.176 "name": "NVMe0", 00:16:51.176 "trtype": "tcp", 00:16:51.176 "traddr": "10.0.0.2", 00:16:51.176 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:51.176 "hostaddr": "10.0.0.2", 00:16:51.176 "hostsvcid": "60000", 00:16:51.176 "adrfam": "ipv4", 00:16:51.176 "trsvcid": "4420", 00:16:51.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.176 "method": "bdev_nvme_attach_controller", 00:16:51.176 "req_id": 1 00:16:51.176 } 00:16:51.176 Got JSON-RPC error response 00:16:51.176 response: 00:16:51.176 { 00:16:51.176 "code": -114, 00:16:51.176 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:51.176 } 00:16:51.176 13:45:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:51.176 13:45:53 -- common/autotest_common.sh@641 -- # es=1 00:16:51.176 13:45:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.176 13:45:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.176 13:45:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.176 13:45:53 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:51.176 13:45:53 -- 
common/autotest_common.sh@638 -- # local es=0 00:16:51.176 13:45:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:51.176 13:45:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:51.176 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.176 13:45:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:51.435 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.435 13:45:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:51.435 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.435 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.435 request: 00:16:51.435 { 00:16:51.435 "name": "NVMe0", 00:16:51.435 "trtype": "tcp", 00:16:51.435 "traddr": "10.0.0.2", 00:16:51.435 "hostaddr": "10.0.0.2", 00:16:51.435 "hostsvcid": "60000", 00:16:51.435 "adrfam": "ipv4", 00:16:51.435 "trsvcid": "4420", 00:16:51.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:51.435 "method": "bdev_nvme_attach_controller", 00:16:51.436 "req_id": 1 00:16:51.436 } 00:16:51.436 Got JSON-RPC error response 00:16:51.436 response: 00:16:51.436 { 00:16:51.436 "code": -114, 00:16:51.436 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:51.436 } 00:16:51.436 13:45:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:51.436 13:45:53 -- common/autotest_common.sh@641 -- # es=1 00:16:51.436 13:45:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.436 13:45:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.436 13:45:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.436 13:45:53 -- 
host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:51.436 13:45:53 -- common/autotest_common.sh@638 -- # local es=0 00:16:51.436 13:45:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:51.436 13:45:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:51.436 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.436 13:45:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:51.436 13:45:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.436 13:45:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:51.436 13:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.436 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:16:51.436 request: 00:16:51.436 { 00:16:51.436 "name": "NVMe0", 00:16:51.436 "trtype": "tcp", 00:16:51.436 "traddr": "10.0.0.2", 00:16:51.436 "hostaddr": "10.0.0.2", 00:16:51.436 "hostsvcid": "60000", 00:16:51.436 "adrfam": "ipv4", 00:16:51.436 "trsvcid": "4420", 00:16:51.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.436 "multipath": "disable", 00:16:51.436 "method": "bdev_nvme_attach_controller", 00:16:51.436 "req_id": 1 00:16:51.436 } 00:16:51.436 Got JSON-RPC error response 00:16:51.436 response: 00:16:51.436 { 00:16:51.436 "code": -114, 00:16:51.436 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:16:51.436 } 00:16:51.436 13:45:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:51.436 13:45:54 -- 
common/autotest_common.sh@641 -- # es=1 00:16:51.436 13:45:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.436 13:45:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.436 13:45:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.436 13:45:54 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:51.436 13:45:54 -- common/autotest_common.sh@638 -- # local es=0 00:16:51.436 13:45:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:51.436 13:45:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:51.436 13:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.436 13:45:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:51.436 13:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.436 13:45:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:51.436 13:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.436 13:45:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.436 request: 00:16:51.436 { 00:16:51.436 "name": "NVMe0", 00:16:51.436 "trtype": "tcp", 00:16:51.436 "traddr": "10.0.0.2", 00:16:51.436 "hostaddr": "10.0.0.2", 00:16:51.436 "hostsvcid": "60000", 00:16:51.436 "adrfam": "ipv4", 00:16:51.436 "trsvcid": "4420", 00:16:51.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.436 "multipath": "failover", 00:16:51.436 "method": "bdev_nvme_attach_controller", 00:16:51.436 "req_id": 1 00:16:51.436 } 00:16:51.436 Got JSON-RPC error response 
00:16:51.436 response: 00:16:51.436 { 00:16:51.436 "code": -114, 00:16:51.436 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:51.436 } 00:16:51.436 13:45:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:51.436 13:45:54 -- common/autotest_common.sh@641 -- # es=1 00:16:51.436 13:45:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.436 13:45:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.436 13:45:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.436 13:45:54 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:51.436 13:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.436 13:45:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.695 00:16:51.695 13:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.695 13:45:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:51.695 13:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.695 13:45:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.695 13:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.695 13:45:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:51.695 13:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.695 13:45:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.695 00:16:51.695 13:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.695 13:45:54 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:51.695 13:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:51.695 13:45:54 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:51.695 13:45:54 -- common/autotest_common.sh@10 -- # set +x 00:16:51.695 13:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.695 13:45:54 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:51.695 13:45:54 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.072 0 00:16:53.072 13:45:55 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:53.072 13:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.072 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:16:53.072 13:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.072 13:45:55 -- host/multicontroller.sh@100 -- # killprocess 2631210 00:16:53.072 13:45:55 -- common/autotest_common.sh@936 -- # '[' -z 2631210 ']' 00:16:53.072 13:45:55 -- common/autotest_common.sh@940 -- # kill -0 2631210 00:16:53.072 13:45:55 -- common/autotest_common.sh@941 -- # uname 00:16:53.072 13:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.072 13:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2631210 00:16:53.072 13:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.072 13:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.072 13:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2631210' 00:16:53.072 killing process with pid 2631210 00:16:53.072 13:45:55 -- common/autotest_common.sh@955 -- # kill 2631210 00:16:53.072 13:45:55 -- common/autotest_common.sh@960 -- # wait 2631210 00:16:53.072 13:45:55 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.072 13:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.072 13:45:55 -- common/autotest_common.sh@10 -- 
# set +x 00:16:53.072 13:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.072 13:45:55 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:53.072 13:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.072 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:16:53.072 13:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.072 13:45:55 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:53.072 13:45:55 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:53.072 13:45:55 -- common/autotest_common.sh@1598 -- # read -r file 00:16:53.072 13:45:55 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:16:53.072 13:45:55 -- common/autotest_common.sh@1597 -- # sort -u 00:16:53.072 13:45:55 -- common/autotest_common.sh@1599 -- # cat 00:16:53.072 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:53.072 [2024-04-18 13:45:53.420648] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:16:53.072 [2024-04-18 13:45:53.420744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631210 ] 00:16:53.072 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.072 [2024-04-18 13:45:53.482274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.072 [2024-04-18 13:45:53.592655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.072 [2024-04-18 13:45:54.368343] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 69857cdb-1127-414b-a61b-86d82cce0ba8 already exists 00:16:53.072 [2024-04-18 13:45:54.368385] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:69857cdb-1127-414b-a61b-86d82cce0ba8 alias for bdev NVMe1n1 00:16:53.072 [2024-04-18 13:45:54.368402] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:53.072 Running I/O for 1 seconds... 00:16:53.072 00:16:53.072 Latency(us) 00:16:53.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.072 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:53.072 NVMe0n1 : 1.00 18756.25 73.27 0.00 0.00 6813.51 2038.90 12913.02 00:16:53.072 =================================================================================================================== 00:16:53.072 Total : 18756.25 73.27 0.00 0.00 6813.51 2038.90 12913.02 00:16:53.072 Received shutdown signal, test time was about 1.000000 seconds 00:16:53.072 00:16:53.072 Latency(us) 00:16:53.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.072 =================================================================================================================== 00:16:53.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.072 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:53.072 13:45:55 -- 
common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:53.072 13:45:55 -- common/autotest_common.sh@1598 -- # read -r file 00:16:53.072 13:45:55 -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:53.072 13:45:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:53.072 13:45:55 -- nvmf/common.sh@117 -- # sync 00:16:53.072 13:45:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.072 13:45:55 -- nvmf/common.sh@120 -- # set +e 00:16:53.072 13:45:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.072 13:45:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.072 rmmod nvme_tcp 00:16:53.072 rmmod nvme_fabrics 00:16:53.331 rmmod nvme_keyring 00:16:53.331 13:45:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.331 13:45:55 -- nvmf/common.sh@124 -- # set -e 00:16:53.331 13:45:55 -- nvmf/common.sh@125 -- # return 0 00:16:53.331 13:45:55 -- nvmf/common.sh@478 -- # '[' -n 2631056 ']' 00:16:53.331 13:45:55 -- nvmf/common.sh@479 -- # killprocess 2631056 00:16:53.331 13:45:55 -- common/autotest_common.sh@936 -- # '[' -z 2631056 ']' 00:16:53.331 13:45:55 -- common/autotest_common.sh@940 -- # kill -0 2631056 00:16:53.331 13:45:55 -- common/autotest_common.sh@941 -- # uname 00:16:53.331 13:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.331 13:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2631056 00:16:53.331 13:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.331 13:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.331 13:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2631056' 00:16:53.331 killing process with pid 2631056 00:16:53.331 13:45:55 -- common/autotest_common.sh@955 -- # kill 2631056 00:16:53.331 13:45:55 -- common/autotest_common.sh@960 -- # wait 2631056 00:16:53.589 13:45:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:53.589 
13:45:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:53.589 13:45:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:53.589 13:45:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.589 13:45:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.589 13:45:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.589 13:45:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.589 13:45:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.157 13:45:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.157 00:16:56.157 real 0m8.356s 00:16:56.157 user 0m14.659s 00:16:56.157 sys 0m2.361s 00:16:56.157 13:45:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.157 13:45:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 ************************************ 00:16:56.157 END TEST nvmf_multicontroller 00:16:56.157 ************************************ 00:16:56.157 13:45:58 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.157 13:45:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.157 13:45:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.157 13:45:58 -- common/autotest_common.sh@10 -- # set +x 00:16:56.157 ************************************ 00:16:56.157 START TEST nvmf_aer 00:16:56.157 ************************************ 00:16:56.157 13:45:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:56.157 * Looking for test storage... 
00:16:56.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:56.157 13:45:58 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.157 13:45:58 -- nvmf/common.sh@7 -- # uname -s 00:16:56.157 13:45:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.157 13:45:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.157 13:45:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.157 13:45:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.157 13:45:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.157 13:45:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.157 13:45:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.157 13:45:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.157 13:45:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.157 13:45:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.157 13:45:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.157 13:45:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.157 13:45:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.157 13:45:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.157 13:45:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.157 13:45:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.157 13:45:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.157 13:45:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.157 13:45:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.157 13:45:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.157 13:45:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.157 13:45:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.157 13:45:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.157 13:45:58 -- paths/export.sh@5 -- # export PATH 00:16:56.157 13:45:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.157 13:45:58 -- nvmf/common.sh@47 -- # : 0 00:16:56.157 13:45:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.157 13:45:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.157 13:45:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.157 13:45:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.157 13:45:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.157 13:45:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.157 13:45:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.157 13:45:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.157 13:45:58 -- host/aer.sh@11 -- # nvmftestinit 00:16:56.158 13:45:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.158 13:45:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.158 13:45:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.158 13:45:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.158 13:45:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.158 13:45:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.158 13:45:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.158 13:45:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.158 13:45:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:56.158 13:45:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:56.158 13:45:58 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:16:56.158 13:45:58 -- common/autotest_common.sh@10 -- # set +x 00:16:58.059 13:46:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:58.059 13:46:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.059 13:46:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.059 13:46:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.059 13:46:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.059 13:46:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.059 13:46:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.059 13:46:00 -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.059 13:46:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.059 13:46:00 -- nvmf/common.sh@296 -- # e810=() 00:16:58.059 13:46:00 -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.059 13:46:00 -- nvmf/common.sh@297 -- # x722=() 00:16:58.059 13:46:00 -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.059 13:46:00 -- nvmf/common.sh@298 -- # mlx=() 00:16:58.059 13:46:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.059 13:46:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.059 13:46:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.059 13:46:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.059 13:46:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.059 13:46:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.059 13:46:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.059 13:46:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.059 13:46:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.059 13:46:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:58.060 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:58.060 13:46:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.060 13:46:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:58.060 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:58.060 13:46:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.060 13:46:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:58.060 13:46:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.060 13:46:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.060 13:46:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.060 13:46:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:58.060 Found net devices under 0000:84:00.0: cvl_0_0 00:16:58.060 13:46:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.060 13:46:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.060 13:46:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.060 13:46:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:58.060 13:46:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.060 13:46:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:58.060 Found net devices under 0000:84:00.1: cvl_0_1 00:16:58.060 13:46:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.060 13:46:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:58.060 13:46:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:58.060 13:46:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:58.060 13:46:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.060 13:46:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.060 13:46:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.060 13:46:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.060 13:46:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.060 13:46:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.060 13:46:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.060 13:46:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:16:58.060 13:46:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.060 13:46:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.060 13:46:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.060 13:46:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.060 13:46:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.060 13:46:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.060 13:46:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.060 13:46:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.060 13:46:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.060 13:46:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.060 13:46:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.060 13:46:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:16:58.060 00:16:58.060 --- 10.0.0.2 ping statistics --- 00:16:58.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.060 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:16:58.060 13:46:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:16:58.060 00:16:58.060 --- 10.0.0.1 ping statistics --- 00:16:58.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.060 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:58.060 13:46:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.060 13:46:00 -- nvmf/common.sh@411 -- # return 0 00:16:58.060 13:46:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:58.060 13:46:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.060 13:46:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:58.060 13:46:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.060 13:46:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:58.060 13:46:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:58.060 13:46:00 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:58.060 13:46:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:58.060 13:46:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:58.060 13:46:00 -- common/autotest_common.sh@10 -- # set +x 00:16:58.060 13:46:00 -- nvmf/common.sh@470 -- # nvmfpid=2633458 00:16:58.060 13:46:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.060 13:46:00 -- nvmf/common.sh@471 -- # waitforlisten 2633458 00:16:58.060 13:46:00 -- common/autotest_common.sh@817 -- # '[' -z 2633458 ']' 00:16:58.060 13:46:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.060 13:46:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.060 13:46:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:58.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.060 13:46:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.060 13:46:00 -- common/autotest_common.sh@10 -- # set +x 00:16:58.060 [2024-04-18 13:46:00.679403] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:16:58.060 [2024-04-18 13:46:00.679506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.060 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.060 [2024-04-18 13:46:00.753056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.318 [2024-04-18 13:46:00.871628] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.318 [2024-04-18 13:46:00.871692] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.318 [2024-04-18 13:46:00.871709] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.318 [2024-04-18 13:46:00.871723] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.318 [2024-04-18 13:46:00.871735] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:58.318 [2024-04-18 13:46:00.871827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.318 [2024-04-18 13:46:00.871896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.318 [2024-04-18 13:46:00.871988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.318 [2024-04-18 13:46:00.871991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.916 13:46:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:58.916 13:46:01 -- common/autotest_common.sh@850 -- # return 0 00:16:58.916 13:46:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:58.916 13:46:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 13:46:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.916 13:46:01 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 [2024-04-18 13:46:01.656141] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.916 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.916 13:46:01 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 Malloc0 00:16:58.916 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.916 13:46:01 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:16:58.916 13:46:01 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.916 13:46:01 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 [2024-04-18 13:46:01.707240] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.916 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.916 13:46:01 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:58.916 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.916 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 [2024-04-18 13:46:01.714968] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:58.916 [ 00:16:58.916 { 00:16:58.916 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.916 "subtype": "Discovery", 00:16:58.916 "listen_addresses": [], 00:16:58.916 "allow_any_host": true, 00:16:58.916 "hosts": [] 00:16:58.916 }, 00:16:58.916 { 00:16:58.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.916 "subtype": "NVMe", 00:16:58.916 "listen_addresses": [ 00:16:58.916 { 00:16:58.916 "transport": "TCP", 00:16:58.916 "trtype": "TCP", 00:16:58.916 "adrfam": "IPv4", 00:16:58.916 "traddr": "10.0.0.2", 00:16:58.916 "trsvcid": "4420" 00:16:58.916 } 00:16:58.916 ], 00:16:58.916 "allow_any_host": true, 00:16:58.916 "hosts": [], 00:16:58.916 "serial_number": "SPDK00000000000001", 00:16:58.916 "model_number": "SPDK bdev Controller", 
00:16:58.916 "max_namespaces": 2, 00:16:58.916 "min_cntlid": 1, 00:16:58.916 "max_cntlid": 65519, 00:16:58.916 "namespaces": [ 00:16:58.916 { 00:16:58.916 "nsid": 1, 00:16:58.916 "bdev_name": "Malloc0", 00:16:58.916 "name": "Malloc0", 00:16:58.916 "nguid": "090BFE50ADC549A7B03219BF16993230", 00:16:58.916 "uuid": "090bfe50-adc5-49a7-b032-19bf16993230" 00:16:58.916 } 00:16:58.916 ] 00:16:58.916 } 00:16:58.916 ] 00:16:58.917 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.917 13:46:01 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:58.917 13:46:01 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:59.174 13:46:01 -- host/aer.sh@33 -- # aerpid=2633606 00:16:59.174 13:46:01 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:59.174 13:46:01 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:59.174 13:46:01 -- common/autotest_common.sh@1251 -- # local i=0 00:16:59.174 13:46:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1254 -- # i=1 00:16:59.174 13:46:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:59.174 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.174 13:46:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1254 -- # i=2 00:16:59.174 13:46:01 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:59.174 13:46:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:59.174 13:46:01 -- common/autotest_common.sh@1262 -- # return 0 00:16:59.174 13:46:01 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:59.174 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.174 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:59.174 Malloc1 00:16:59.174 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.174 13:46:01 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:59.175 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.175 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:59.175 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.175 13:46:01 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:59.175 13:46:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.431 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:16:59.431 Asynchronous Event Request test 00:16:59.431 Attaching to 10.0.0.2 00:16:59.431 Attached to 10.0.0.2 00:16:59.431 Registering asynchronous event callbacks... 00:16:59.431 Starting namespace attribute notice tests for all controllers... 00:16:59.431 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:59.431 aer_cb - Changed Namespace 00:16:59.431 Cleaning up... 
00:16:59.431 [ 00:16:59.431 { 00:16:59.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.431 "subtype": "Discovery", 00:16:59.431 "listen_addresses": [], 00:16:59.431 "allow_any_host": true, 00:16:59.431 "hosts": [] 00:16:59.431 }, 00:16:59.431 { 00:16:59.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.431 "subtype": "NVMe", 00:16:59.431 "listen_addresses": [ 00:16:59.431 { 00:16:59.431 "transport": "TCP", 00:16:59.431 "trtype": "TCP", 00:16:59.431 "adrfam": "IPv4", 00:16:59.431 "traddr": "10.0.0.2", 00:16:59.431 "trsvcid": "4420" 00:16:59.431 } 00:16:59.431 ], 00:16:59.431 "allow_any_host": true, 00:16:59.431 "hosts": [], 00:16:59.431 "serial_number": "SPDK00000000000001", 00:16:59.431 "model_number": "SPDK bdev Controller", 00:16:59.431 "max_namespaces": 2, 00:16:59.431 "min_cntlid": 1, 00:16:59.431 "max_cntlid": 65519, 00:16:59.431 "namespaces": [ 00:16:59.431 { 00:16:59.431 "nsid": 1, 00:16:59.431 "bdev_name": "Malloc0", 00:16:59.431 "name": "Malloc0", 00:16:59.431 "nguid": "090BFE50ADC549A7B03219BF16993230", 00:16:59.431 "uuid": "090bfe50-adc5-49a7-b032-19bf16993230" 00:16:59.431 }, 00:16:59.431 { 00:16:59.431 "nsid": 2, 00:16:59.431 "bdev_name": "Malloc1", 00:16:59.431 "name": "Malloc1", 00:16:59.431 "nguid": "1A2C7727F96842599D8254AE002A7F23", 00:16:59.431 "uuid": "1a2c7727-f968-4259-9d82-54ae002a7f23" 00:16:59.431 } 00:16:59.431 ] 00:16:59.431 } 00:16:59.431 ] 00:16:59.431 13:46:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.431 13:46:01 -- host/aer.sh@43 -- # wait 2633606 00:16:59.431 13:46:02 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.431 13:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.431 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.431 13:46:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.431 13:46:02 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:59.431 13:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.431 
13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.431 13:46:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.431 13:46:02 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.431 13:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.431 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.431 13:46:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.431 13:46:02 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:59.431 13:46:02 -- host/aer.sh@51 -- # nvmftestfini 00:16:59.431 13:46:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:59.431 13:46:02 -- nvmf/common.sh@117 -- # sync 00:16:59.431 13:46:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.431 13:46:02 -- nvmf/common.sh@120 -- # set +e 00:16:59.431 13:46:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.431 13:46:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.431 rmmod nvme_tcp 00:16:59.431 rmmod nvme_fabrics 00:16:59.431 rmmod nvme_keyring 00:16:59.431 13:46:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:59.431 13:46:02 -- nvmf/common.sh@124 -- # set -e 00:16:59.431 13:46:02 -- nvmf/common.sh@125 -- # return 0 00:16:59.431 13:46:02 -- nvmf/common.sh@478 -- # '[' -n 2633458 ']' 00:16:59.431 13:46:02 -- nvmf/common.sh@479 -- # killprocess 2633458 00:16:59.431 13:46:02 -- common/autotest_common.sh@936 -- # '[' -z 2633458 ']' 00:16:59.431 13:46:02 -- common/autotest_common.sh@940 -- # kill -0 2633458 00:16:59.431 13:46:02 -- common/autotest_common.sh@941 -- # uname 00:16:59.431 13:46:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.431 13:46:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2633458 00:16:59.431 13:46:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.431 13:46:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.431 13:46:02 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 2633458' 00:16:59.431 killing process with pid 2633458 00:16:59.431 13:46:02 -- common/autotest_common.sh@955 -- # kill 2633458 00:16:59.431 [2024-04-18 13:46:02.141450] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:59.431 13:46:02 -- common/autotest_common.sh@960 -- # wait 2633458 00:16:59.690 13:46:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:59.690 13:46:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:59.690 13:46:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:59.690 13:46:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.690 13:46:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.690 13:46:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.690 13:46:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.690 13:46:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.223 13:46:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.223 00:17:02.223 real 0m6.034s 00:17:02.223 user 0m6.935s 00:17:02.223 sys 0m1.898s 00:17:02.223 13:46:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.223 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:02.223 ************************************ 00:17:02.223 END TEST nvmf_aer 00:17:02.223 ************************************ 00:17:02.223 13:46:04 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:02.223 13:46:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:02.223 13:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.223 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:02.223 ************************************ 00:17:02.223 START TEST nvmf_async_init 00:17:02.223 
************************************ 00:17:02.223 13:46:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:02.223 * Looking for test storage... 00:17:02.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:02.223 13:46:04 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.223 13:46:04 -- nvmf/common.sh@7 -- # uname -s 00:17:02.223 13:46:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.223 13:46:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.223 13:46:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.223 13:46:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.223 13:46:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.223 13:46:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.223 13:46:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.223 13:46:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.223 13:46:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.223 13:46:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.223 13:46:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.223 13:46:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.223 13:46:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.223 13:46:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.223 13:46:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.223 13:46:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.223 13:46:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.223 13:46:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:17:02.223 13:46:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.223 13:46:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.223 13:46:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.223 13:46:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.223 13:46:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.223 13:46:04 -- paths/export.sh@5 -- # export PATH 00:17:02.223 13:46:04 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.223 13:46:04 -- nvmf/common.sh@47 -- # : 0 00:17:02.223 13:46:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.223 13:46:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.223 13:46:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.223 13:46:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.223 13:46:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.223 13:46:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.223 13:46:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.223 13:46:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.223 13:46:04 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:02.223 13:46:04 -- host/async_init.sh@14 -- # null_block_size=512 00:17:02.223 13:46:04 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:02.223 13:46:04 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:02.223 13:46:04 -- host/async_init.sh@20 -- # uuidgen 00:17:02.223 13:46:04 -- host/async_init.sh@20 -- # tr -d - 00:17:02.223 13:46:04 -- host/async_init.sh@20 -- # nguid=659f11a5e34b44ecb70d36ed53606ae9 00:17:02.223 13:46:04 -- host/async_init.sh@22 -- # nvmftestinit 00:17:02.223 13:46:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:02.223 13:46:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.223 13:46:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:02.223 13:46:04 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:17:02.223 13:46:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:02.223 13:46:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.223 13:46:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.223 13:46:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.223 13:46:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:02.223 13:46:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:02.223 13:46:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.223 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:17:04.125 13:46:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:04.125 13:46:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.125 13:46:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.125 13:46:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.125 13:46:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.125 13:46:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.125 13:46:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.125 13:46:06 -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.125 13:46:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.125 13:46:06 -- nvmf/common.sh@296 -- # e810=() 00:17:04.125 13:46:06 -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.125 13:46:06 -- nvmf/common.sh@297 -- # x722=() 00:17:04.125 13:46:06 -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.125 13:46:06 -- nvmf/common.sh@298 -- # mlx=() 00:17:04.125 13:46:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.125 13:46:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.125 13:46:06 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.125 13:46:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.125 13:46:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.125 13:46:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.125 13:46:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.125 13:46:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:04.125 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:04.125 13:46:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.125 13:46:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.126 13:46:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:04.126 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:04.126 13:46:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.126 
13:46:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.126 13:46:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.126 13:46:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.126 13:46:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:04.126 13:46:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.126 13:46:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:04.126 Found net devices under 0000:84:00.0: cvl_0_0 00:17:04.126 13:46:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.126 13:46:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.126 13:46:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.126 13:46:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:04.126 13:46:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.126 13:46:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:04.126 Found net devices under 0000:84:00.1: cvl_0_1 00:17:04.126 13:46:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.126 13:46:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:04.126 13:46:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:04.126 13:46:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:04.126 13:46:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.126 13:46:06 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.126 13:46:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.126 13:46:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.126 13:46:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.126 13:46:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.126 13:46:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.126 13:46:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.126 13:46:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.126 13:46:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.126 13:46:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.126 13:46:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.126 13:46:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.126 13:46:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.126 13:46:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.126 13:46:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.126 13:46:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.126 13:46:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.126 13:46:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.126 13:46:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:04.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:17:04.126 00:17:04.126 --- 10.0.0.2 ping statistics --- 00:17:04.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.126 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:04.126 13:46:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:04.126 00:17:04.126 --- 10.0.0.1 ping statistics --- 00:17:04.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.126 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:04.126 13:46:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.126 13:46:06 -- nvmf/common.sh@411 -- # return 0 00:17:04.126 13:46:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:04.126 13:46:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.126 13:46:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:04.126 13:46:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.126 13:46:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:04.126 13:46:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:04.126 13:46:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:04.126 13:46:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:04.126 13:46:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:04.126 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:17:04.126 13:46:06 -- nvmf/common.sh@470 -- # nvmfpid=2635680 00:17:04.126 13:46:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.126 13:46:06 -- nvmf/common.sh@471 -- # waitforlisten 2635680 00:17:04.126 13:46:06 -- common/autotest_common.sh@817 
-- # '[' -z 2635680 ']' 00:17:04.126 13:46:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.126 13:46:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:04.126 13:46:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.126 13:46:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:04.126 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:17:04.126 [2024-04-18 13:46:06.918088] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:04.126 [2024-04-18 13:46:06.918174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.385 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.385 [2024-04-18 13:46:06.983858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.385 [2024-04-18 13:46:07.090740] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.385 [2024-04-18 13:46:07.090810] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.385 [2024-04-18 13:46:07.090823] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.385 [2024-04-18 13:46:07.090834] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.385 [2024-04-18 13:46:07.090844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:04.385 [2024-04-18 13:46:07.090872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.643 13:46:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.643 13:46:07 -- common/autotest_common.sh@850 -- # return 0 00:17:04.643 13:46:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.643 13:46:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 13:46:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.643 13:46:07 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 [2024-04-18 13:46:07.243332] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 null0 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 659f11a5e34b44ecb70d36ed53606ae9 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.643 [2024-04-18 13:46:07.283625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.643 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.643 13:46:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:04.643 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.643 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.900 nvme0n1 00:17:04.900 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.900 13:46:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:04.900 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.900 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.900 [ 00:17:04.900 { 00:17:04.900 "name": "nvme0n1", 00:17:04.900 "aliases": [ 00:17:04.900 "659f11a5-e34b-44ec-b70d-36ed53606ae9" 00:17:04.900 ], 00:17:04.900 "product_name": "NVMe disk", 00:17:04.900 "block_size": 512, 00:17:04.900 "num_blocks": 2097152, 00:17:04.900 "uuid": "659f11a5-e34b-44ec-b70d-36ed53606ae9", 00:17:04.900 "assigned_rate_limits": { 00:17:04.900 "rw_ios_per_sec": 0, 00:17:04.900 "rw_mbytes_per_sec": 0, 00:17:04.900 "r_mbytes_per_sec": 0, 00:17:04.900 "w_mbytes_per_sec": 0 00:17:04.900 }, 00:17:04.900 
"claimed": false, 00:17:04.900 "zoned": false, 00:17:04.900 "supported_io_types": { 00:17:04.900 "read": true, 00:17:04.900 "write": true, 00:17:04.900 "unmap": false, 00:17:04.900 "write_zeroes": true, 00:17:04.900 "flush": true, 00:17:04.900 "reset": true, 00:17:04.900 "compare": true, 00:17:04.900 "compare_and_write": true, 00:17:04.900 "abort": true, 00:17:04.900 "nvme_admin": true, 00:17:04.900 "nvme_io": true 00:17:04.900 }, 00:17:04.900 "memory_domains": [ 00:17:04.900 { 00:17:04.900 "dma_device_id": "system", 00:17:04.900 "dma_device_type": 1 00:17:04.900 } 00:17:04.900 ], 00:17:04.900 "driver_specific": { 00:17:04.900 "nvme": [ 00:17:04.900 { 00:17:04.900 "trid": { 00:17:04.900 "trtype": "TCP", 00:17:04.900 "adrfam": "IPv4", 00:17:04.900 "traddr": "10.0.0.2", 00:17:04.900 "trsvcid": "4420", 00:17:04.900 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:04.900 }, 00:17:04.900 "ctrlr_data": { 00:17:04.900 "cntlid": 1, 00:17:04.900 "vendor_id": "0x8086", 00:17:04.900 "model_number": "SPDK bdev Controller", 00:17:04.900 "serial_number": "00000000000000000000", 00:17:04.900 "firmware_revision": "24.05", 00:17:04.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:04.900 "oacs": { 00:17:04.900 "security": 0, 00:17:04.900 "format": 0, 00:17:04.900 "firmware": 0, 00:17:04.900 "ns_manage": 0 00:17:04.900 }, 00:17:04.900 "multi_ctrlr": true, 00:17:04.900 "ana_reporting": false 00:17:04.900 }, 00:17:04.900 "vs": { 00:17:04.900 "nvme_version": "1.3" 00:17:04.900 }, 00:17:04.900 "ns_data": { 00:17:04.900 "id": 1, 00:17:04.900 "can_share": true 00:17:04.900 } 00:17:04.900 } 00:17:04.900 ], 00:17:04.900 "mp_policy": "active_passive" 00:17:04.900 } 00:17:04.900 } 00:17:04.900 ] 00:17:04.900 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.900 13:46:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:04.900 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.900 13:46:07 -- common/autotest_common.sh@10 -- # set +x 
00:17:04.900 [2024-04-18 13:46:07.536281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:04.900 [2024-04-18 13:46:07.536364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f3eb0 (9): Bad file descriptor 00:17:04.900 [2024-04-18 13:46:07.678327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:04.900 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.900 13:46:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:04.900 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.900 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:04.900 [ 00:17:04.900 { 00:17:04.900 "name": "nvme0n1", 00:17:04.900 "aliases": [ 00:17:04.900 "659f11a5-e34b-44ec-b70d-36ed53606ae9" 00:17:04.900 ], 00:17:04.900 "product_name": "NVMe disk", 00:17:04.900 "block_size": 512, 00:17:04.900 "num_blocks": 2097152, 00:17:04.900 "uuid": "659f11a5-e34b-44ec-b70d-36ed53606ae9", 00:17:04.900 "assigned_rate_limits": { 00:17:04.900 "rw_ios_per_sec": 0, 00:17:04.900 "rw_mbytes_per_sec": 0, 00:17:04.900 "r_mbytes_per_sec": 0, 00:17:04.900 "w_mbytes_per_sec": 0 00:17:04.900 }, 00:17:04.900 "claimed": false, 00:17:04.900 "zoned": false, 00:17:04.900 "supported_io_types": { 00:17:04.900 "read": true, 00:17:04.900 "write": true, 00:17:04.900 "unmap": false, 00:17:04.900 "write_zeroes": true, 00:17:04.900 "flush": true, 00:17:04.900 "reset": true, 00:17:04.900 "compare": true, 00:17:04.901 "compare_and_write": true, 00:17:04.901 "abort": true, 00:17:04.901 "nvme_admin": true, 00:17:04.901 "nvme_io": true 00:17:04.901 }, 00:17:04.901 "memory_domains": [ 00:17:04.901 { 00:17:04.901 "dma_device_id": "system", 00:17:04.901 "dma_device_type": 1 00:17:04.901 } 00:17:04.901 ], 00:17:04.901 "driver_specific": { 00:17:04.901 "nvme": [ 00:17:04.901 { 00:17:04.901 "trid": { 00:17:04.901 "trtype": "TCP", 
00:17:04.901 "adrfam": "IPv4", 00:17:04.901 "traddr": "10.0.0.2", 00:17:04.901 "trsvcid": "4420", 00:17:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:04.901 }, 00:17:04.901 "ctrlr_data": { 00:17:04.901 "cntlid": 2, 00:17:04.901 "vendor_id": "0x8086", 00:17:04.901 "model_number": "SPDK bdev Controller", 00:17:04.901 "serial_number": "00000000000000000000", 00:17:04.901 "firmware_revision": "24.05", 00:17:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:04.901 "oacs": { 00:17:04.901 "security": 0, 00:17:04.901 "format": 0, 00:17:04.901 "firmware": 0, 00:17:04.901 "ns_manage": 0 00:17:04.901 }, 00:17:04.901 "multi_ctrlr": true, 00:17:04.901 "ana_reporting": false 00:17:04.901 }, 00:17:04.901 "vs": { 00:17:04.901 "nvme_version": "1.3" 00:17:04.901 }, 00:17:04.901 "ns_data": { 00:17:04.901 "id": 1, 00:17:04.901 "can_share": true 00:17:04.901 } 00:17:04.901 } 00:17:04.901 ], 00:17:04.901 "mp_policy": "active_passive" 00:17:04.901 } 00:17:04.901 } 00:17:04.901 ] 00:17:04.901 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.901 13:46:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.901 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.901 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.158 13:46:07 -- host/async_init.sh@53 -- # mktemp 00:17:05.158 13:46:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FJPpU3PoTX 00:17:05.158 13:46:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:05.158 13:46:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FJPpU3PoTX 00:17:05.158 13:46:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 13:46:07 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.158 13:46:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:05.158 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [2024-04-18 13:46:07.728926] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.158 [2024-04-18 13:46:07.729066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:05.158 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.158 13:46:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FJPpU3PoTX 00:17:05.158 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [2024-04-18 13:46:07.736948] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:05.158 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.158 13:46:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FJPpU3PoTX 00:17:05.158 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [2024-04-18 13:46:07.744962] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.158 [2024-04-18 13:46:07.745030] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:05.158 nvme0n1 00:17:05.158 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:17:05.158 13:46:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:05.158 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.158 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [ 00:17:05.158 { 00:17:05.158 "name": "nvme0n1", 00:17:05.158 "aliases": [ 00:17:05.158 "659f11a5-e34b-44ec-b70d-36ed53606ae9" 00:17:05.158 ], 00:17:05.158 "product_name": "NVMe disk", 00:17:05.158 "block_size": 512, 00:17:05.158 "num_blocks": 2097152, 00:17:05.158 "uuid": "659f11a5-e34b-44ec-b70d-36ed53606ae9", 00:17:05.158 "assigned_rate_limits": { 00:17:05.158 "rw_ios_per_sec": 0, 00:17:05.158 "rw_mbytes_per_sec": 0, 00:17:05.158 "r_mbytes_per_sec": 0, 00:17:05.158 "w_mbytes_per_sec": 0 00:17:05.158 }, 00:17:05.158 "claimed": false, 00:17:05.158 "zoned": false, 00:17:05.158 "supported_io_types": { 00:17:05.158 "read": true, 00:17:05.158 "write": true, 00:17:05.158 "unmap": false, 00:17:05.158 "write_zeroes": true, 00:17:05.158 "flush": true, 00:17:05.158 "reset": true, 00:17:05.158 "compare": true, 00:17:05.158 "compare_and_write": true, 00:17:05.158 "abort": true, 00:17:05.158 "nvme_admin": true, 00:17:05.158 "nvme_io": true 00:17:05.158 }, 00:17:05.158 "memory_domains": [ 00:17:05.158 { 00:17:05.158 "dma_device_id": "system", 00:17:05.158 "dma_device_type": 1 00:17:05.158 } 00:17:05.158 ], 00:17:05.158 "driver_specific": { 00:17:05.158 "nvme": [ 00:17:05.159 { 00:17:05.159 "trid": { 00:17:05.159 "trtype": "TCP", 00:17:05.159 "adrfam": "IPv4", 00:17:05.159 "traddr": "10.0.0.2", 00:17:05.159 "trsvcid": "4421", 00:17:05.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.159 }, 00:17:05.159 "ctrlr_data": { 00:17:05.159 "cntlid": 3, 00:17:05.159 "vendor_id": "0x8086", 00:17:05.159 "model_number": "SPDK bdev Controller", 00:17:05.159 "serial_number": "00000000000000000000", 00:17:05.159 "firmware_revision": "24.05", 00:17:05.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.159 "oacs": { 00:17:05.159 "security": 0, 
00:17:05.159 "format": 0, 00:17:05.159 "firmware": 0, 00:17:05.159 "ns_manage": 0 00:17:05.159 }, 00:17:05.159 "multi_ctrlr": true, 00:17:05.159 "ana_reporting": false 00:17:05.159 }, 00:17:05.159 "vs": { 00:17:05.159 "nvme_version": "1.3" 00:17:05.159 }, 00:17:05.159 "ns_data": { 00:17:05.159 "id": 1, 00:17:05.159 "can_share": true 00:17:05.159 } 00:17:05.159 } 00:17:05.159 ], 00:17:05.159 "mp_policy": "active_passive" 00:17:05.159 } 00:17:05.159 } 00:17:05.159 ] 00:17:05.159 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.159 13:46:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.159 13:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.159 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:05.159 13:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.159 13:46:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FJPpU3PoTX 00:17:05.159 13:46:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:05.159 13:46:07 -- host/async_init.sh@78 -- # nvmftestfini 00:17:05.159 13:46:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:05.159 13:46:07 -- nvmf/common.sh@117 -- # sync 00:17:05.159 13:46:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.159 13:46:07 -- nvmf/common.sh@120 -- # set +e 00:17:05.159 13:46:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.159 13:46:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.159 rmmod nvme_tcp 00:17:05.159 rmmod nvme_fabrics 00:17:05.159 rmmod nvme_keyring 00:17:05.159 13:46:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.159 13:46:07 -- nvmf/common.sh@124 -- # set -e 00:17:05.159 13:46:07 -- nvmf/common.sh@125 -- # return 0 00:17:05.159 13:46:07 -- nvmf/common.sh@478 -- # '[' -n 2635680 ']' 00:17:05.159 13:46:07 -- nvmf/common.sh@479 -- # killprocess 2635680 00:17:05.159 13:46:07 -- common/autotest_common.sh@936 -- # '[' -z 2635680 ']' 00:17:05.159 13:46:07 -- 
common/autotest_common.sh@940 -- # kill -0 2635680 00:17:05.159 13:46:07 -- common/autotest_common.sh@941 -- # uname 00:17:05.159 13:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.159 13:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2635680 00:17:05.159 13:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.159 13:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.159 13:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2635680' 00:17:05.159 killing process with pid 2635680 00:17:05.159 13:46:07 -- common/autotest_common.sh@955 -- # kill 2635680 00:17:05.159 [2024-04-18 13:46:07.932275] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:05.159 [2024-04-18 13:46:07.932314] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:05.159 13:46:07 -- common/autotest_common.sh@960 -- # wait 2635680 00:17:05.416 13:46:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:05.416 13:46:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:05.416 13:46:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:05.416 13:46:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.416 13:46:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.416 13:46:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.416 13:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.416 13:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.947 13:46:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.947 00:17:07.947 real 0m5.617s 00:17:07.947 user 0m2.175s 00:17:07.947 sys 0m1.814s 00:17:07.947 13:46:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.947 13:46:10 -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.947 ************************************ 00:17:07.947 END TEST nvmf_async_init 00:17:07.947 ************************************ 00:17:07.947 13:46:10 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:07.947 13:46:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:07.947 13:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.947 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.947 ************************************ 00:17:07.947 START TEST dma 00:17:07.947 ************************************ 00:17:07.947 13:46:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:07.947 * Looking for test storage... 00:17:07.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:07.947 13:46:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.947 13:46:10 -- nvmf/common.sh@7 -- # uname -s 00:17:07.947 13:46:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.947 13:46:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.947 13:46:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.947 13:46:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.947 13:46:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.947 13:46:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.947 13:46:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.947 13:46:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.947 13:46:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.947 13:46:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.947 13:46:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.947 
13:46:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.947 13:46:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.947 13:46:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.947 13:46:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.947 13:46:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.947 13:46:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.947 13:46:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.947 13:46:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.947 13:46:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.947 13:46:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@5 -- # export PATH 00:17:07.947 13:46:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- nvmf/common.sh@47 -- # : 0 00:17:07.947 13:46:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.947 13:46:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.947 13:46:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.947 13:46:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.947 13:46:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.947 13:46:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:07.947 13:46:10 -- host/dma.sh@13 -- # exit 0 00:17:07.947 00:17:07.947 real 0m0.069s 00:17:07.947 user 0m0.031s 
00:17:07.947 sys 0m0.044s 00:17:07.947 13:46:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.947 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.947 ************************************ 00:17:07.947 END TEST dma 00:17:07.947 ************************************ 00:17:07.947 13:46:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:07.947 13:46:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:07.947 13:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.947 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.947 ************************************ 00:17:07.947 START TEST nvmf_identify 00:17:07.947 ************************************ 00:17:07.947 13:46:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:07.947 * Looking for test storage... 
00:17:07.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:07.947 13:46:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.947 13:46:10 -- nvmf/common.sh@7 -- # uname -s 00:17:07.947 13:46:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.947 13:46:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.947 13:46:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.947 13:46:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.947 13:46:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.947 13:46:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.947 13:46:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.947 13:46:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.947 13:46:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.947 13:46:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.947 13:46:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.947 13:46:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.947 13:46:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.947 13:46:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.947 13:46:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.947 13:46:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.947 13:46:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.947 13:46:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.947 13:46:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.947 13:46:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.947 13:46:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- paths/export.sh@5 -- # export PATH 00:17:07.947 13:46:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.947 13:46:10 -- nvmf/common.sh@47 -- # : 0 00:17:07.947 13:46:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.947 13:46:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.947 13:46:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.947 13:46:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.947 13:46:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.947 13:46:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.947 13:46:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.947 13:46:10 -- host/identify.sh@14 -- # nvmftestinit 00:17:07.947 13:46:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:07.947 13:46:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.947 13:46:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:07.947 13:46:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:07.947 13:46:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:07.947 13:46:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.947 13:46:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.947 13:46:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.947 13:46:10 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:07.947 13:46:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:07.947 13:46:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.947 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:17:09.848 13:46:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:09.848 13:46:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.848 13:46:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.848 13:46:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.848 13:46:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.848 13:46:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.848 13:46:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.848 13:46:12 -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.848 13:46:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.848 13:46:12 -- nvmf/common.sh@296 -- # e810=() 00:17:09.848 13:46:12 -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.848 13:46:12 -- nvmf/common.sh@297 -- # x722=() 00:17:09.848 13:46:12 -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.848 13:46:12 -- nvmf/common.sh@298 -- # mlx=() 00:17:09.848 13:46:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.848 13:46:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.848 13:46:12 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.848 13:46:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.848 13:46:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.848 13:46:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.848 13:46:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.848 13:46:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:09.848 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:09.848 13:46:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.848 13:46:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:09.848 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:09.848 13:46:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.848 13:46:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.848 13:46:12 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.848 13:46:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.848 13:46:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.848 13:46:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:09.848 13:46:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.848 13:46:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:09.848 Found net devices under 0000:84:00.0: cvl_0_0 00:17:09.848 13:46:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.848 13:46:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.848 13:46:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.848 13:46:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:09.848 13:46:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.848 13:46:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:09.848 Found net devices under 0000:84:00.1: cvl_0_1 00:17:09.848 13:46:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.849 13:46:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:09.849 13:46:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:09.849 13:46:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:09.849 13:46:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:09.849 13:46:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:09.849 13:46:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.849 13:46:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.849 13:46:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.849 13:46:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.849 13:46:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.849 13:46:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.849 13:46:12 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.849 13:46:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.849 13:46:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.849 13:46:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.849 13:46:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.849 13:46:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.849 13:46:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.849 13:46:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.849 13:46:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.849 13:46:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.849 13:46:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.107 13:46:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.107 13:46:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.107 13:46:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:17:10.107 00:17:10.107 --- 10.0.0.2 ping statistics --- 00:17:10.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.107 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:17:10.107 13:46:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:17:10.108 00:17:10.108 --- 10.0.0.1 ping statistics --- 00:17:10.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.108 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:10.108 13:46:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.108 13:46:12 -- nvmf/common.sh@411 -- # return 0 00:17:10.108 13:46:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.108 13:46:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.108 13:46:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.108 13:46:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.108 13:46:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.108 13:46:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.108 13:46:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.108 13:46:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:10.108 13:46:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.108 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:10.108 13:46:12 -- host/identify.sh@19 -- # nvmfpid=2637839 00:17:10.108 13:46:12 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:10.108 13:46:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.108 13:46:12 -- host/identify.sh@23 -- # waitforlisten 2637839 00:17:10.108 13:46:12 -- common/autotest_common.sh@817 -- # '[' -z 2637839 ']' 00:17:10.108 13:46:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.108 13:46:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.108 13:46:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:10.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.108 13:46:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.108 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:17:10.108 [2024-04-18 13:46:12.748944] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:10.108 [2024-04-18 13:46:12.749017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.108 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.108 [2024-04-18 13:46:12.816390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.366 [2024-04-18 13:46:12.928400] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.366 [2024-04-18 13:46:12.928450] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.366 [2024-04-18 13:46:12.928463] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.366 [2024-04-18 13:46:12.928488] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.366 [2024-04-18 13:46:12.928498] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.366 [2024-04-18 13:46:12.928584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.366 [2024-04-18 13:46:12.928651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.366 [2024-04-18 13:46:12.928719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.366 [2024-04-18 13:46:12.928722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.366 13:46:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.366 13:46:13 -- common/autotest_common.sh@850 -- # return 0 00:17:10.366 13:46:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 [2024-04-18 13:46:13.056720] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:10.366 13:46:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 13:46:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 Malloc0 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 [2024-04-18 13:46:13.128773] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:10.366 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.366 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.366 [2024-04-18 13:46:13.144561] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:10.366 [ 00:17:10.366 { 00:17:10.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:10.366 "subtype": "Discovery", 00:17:10.366 "listen_addresses": [ 00:17:10.366 { 00:17:10.366 "transport": "TCP", 00:17:10.366 "trtype": "TCP", 00:17:10.366 "adrfam": "IPv4", 00:17:10.366 "traddr": "10.0.0.2", 00:17:10.366 "trsvcid": "4420" 00:17:10.366 } 00:17:10.366 ], 00:17:10.366 "allow_any_host": true, 00:17:10.366 "hosts": [] 00:17:10.366 }, 00:17:10.366 
{ 00:17:10.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.366 "subtype": "NVMe", 00:17:10.366 "listen_addresses": [ 00:17:10.366 { 00:17:10.366 "transport": "TCP", 00:17:10.366 "trtype": "TCP", 00:17:10.366 "adrfam": "IPv4", 00:17:10.366 "traddr": "10.0.0.2", 00:17:10.366 "trsvcid": "4420" 00:17:10.366 } 00:17:10.366 ], 00:17:10.366 "allow_any_host": true, 00:17:10.366 "hosts": [], 00:17:10.366 "serial_number": "SPDK00000000000001", 00:17:10.366 "model_number": "SPDK bdev Controller", 00:17:10.366 "max_namespaces": 32, 00:17:10.366 "min_cntlid": 1, 00:17:10.366 "max_cntlid": 65519, 00:17:10.366 "namespaces": [ 00:17:10.366 { 00:17:10.366 "nsid": 1, 00:17:10.366 "bdev_name": "Malloc0", 00:17:10.366 "name": "Malloc0", 00:17:10.366 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:10.366 "eui64": "ABCDEF0123456789", 00:17:10.366 "uuid": "0bb614b4-7d6e-4f19-ab2f-a98e8876db15" 00:17:10.366 } 00:17:10.366 ] 00:17:10.366 } 00:17:10.366 ] 00:17:10.366 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.366 13:46:13 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:10.366 [2024-04-18 13:46:13.166946] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:10.366 [2024-04-18 13:46:13.166986] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637985 ] 00:17:10.626 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.626 [2024-04-18 13:46:13.200401] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:10.626 [2024-04-18 13:46:13.200477] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:10.626 [2024-04-18 13:46:13.200488] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:10.626 [2024-04-18 13:46:13.200503] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:10.626 [2024-04-18 13:46:13.200515] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:10.626 [2024-04-18 13:46:13.200834] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:10.626 [2024-04-18 13:46:13.200891] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfd15b0 0 00:17:10.626 [2024-04-18 13:46:13.218189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:10.626 [2024-04-18 13:46:13.218212] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:10.626 [2024-04-18 13:46:13.218220] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:10.626 [2024-04-18 13:46:13.218226] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:10.626 [2024-04-18 13:46:13.218286] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.218298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:10.626 [2024-04-18 13:46:13.218306] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.218325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:10.626 [2024-04-18 13:46:13.218353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.228188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.228206] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.228213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228221] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 13:46:13.228248] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:10.626 [2024-04-18 13:46:13.228267] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:10.626 [2024-04-18 13:46:13.228276] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:10.626 [2024-04-18 13:46:13.228297] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228305] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.228323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.228346] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.228529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.228544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.228550] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228556] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 13:46:13.228567] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:10.626 [2024-04-18 13:46:13.228579] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:10.626 [2024-04-18 13:46:13.228591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.228614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.228635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.228752] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.228766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.228773] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228779] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 
13:46:13.228788] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:10.626 [2024-04-18 13:46:13.228803] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.228814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228821] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.228836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.228856] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.228957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.228971] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.228977] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.228983] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 13:46:13.228993] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.229013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229022] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 
00:17:10.626 [2024-04-18 13:46:13.229038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.229058] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.229186] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.229200] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.229206] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229212] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 13:46:13.229237] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:10.626 [2024-04-18 13:46:13.229246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.229260] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.229370] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:10.626 [2024-04-18 13:46:13.229378] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.229393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229401] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229407] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.229417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.229439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.229665] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.626 [2024-04-18 13:46:13.229679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.626 [2024-04-18 13:46:13.229686] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229692] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.626 [2024-04-18 13:46:13.229701] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:10.626 [2024-04-18 13:46:13.229717] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229726] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.626 [2024-04-18 13:46:13.229732] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.626 [2024-04-18 13:46:13.229742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.626 [2024-04-18 13:46:13.229761] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.626 [2024-04-18 13:46:13.229860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.627 [2024-04-18 13:46:13.229871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.627 [2024-04-18 13:46:13.229877] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.229887] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.627 [2024-04-18 13:46:13.229897] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:10.627 [2024-04-18 13:46:13.229904] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:10.627 [2024-04-18 13:46:13.229917] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:10.627 [2024-04-18 13:46:13.229936] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:10.627 [2024-04-18 13:46:13.229954] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.229962] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.229972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.627 [2024-04-18 13:46:13.229992] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.627 [2024-04-18 13:46:13.230128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.627 [2024-04-18 13:46:13.230139] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.627 [2024-04-18 13:46:13.230146] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.230152] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xfd15b0): datao=0, datal=4096, cccid=0 00:17:10.627 [2024-04-18 13:46:13.230175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1031410) on tqpair(0xfd15b0): expected_datao=0, payload_size=4096 00:17:10.627 [2024-04-18 13:46:13.230194] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.230212] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.230222] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.627 [2024-04-18 13:46:13.271208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.627 [2024-04-18 13:46:13.271215] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271222] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.627 [2024-04-18 13:46:13.271237] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:10.627 [2024-04-18 13:46:13.271246] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:10.627 [2024-04-18 13:46:13.271254] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:10.627 [2024-04-18 13:46:13.271262] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:10.627 [2024-04-18 13:46:13.271270] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:10.627 [2024-04-18 13:46:13.271278] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:10.627 [2024-04-18 
13:46:13.271294] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:10.627 [2024-04-18 13:46:13.271312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271320] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271327] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.627 [2024-04-18 13:46:13.271366] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.627 [2024-04-18 13:46:13.271604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.627 [2024-04-18 13:46:13.271619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.627 [2024-04-18 13:46:13.271625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271631] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031410) on tqpair=0xfd15b0 00:17:10.627 [2024-04-18 13:46:13.271645] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271652] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271658] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.627 [2024-04-18 13:46:13.271677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271683] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.627 [2024-04-18 13:46:13.271705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.627 [2024-04-18 13:46:13.271734] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271741] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.627 [2024-04-18 13:46:13.271762] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:10.627 [2024-04-18 13:46:13.271781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:10.627 [2024-04-18 13:46:13.271793] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.271799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.271809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.627 [2024-04-18 13:46:13.271830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031410, cid 0, qid 0 00:17:10.627 [2024-04-18 13:46:13.271840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031570, cid 1, qid 0 00:17:10.627 [2024-04-18 13:46:13.271848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10316d0, cid 2, qid 0 00:17:10.627 [2024-04-18 13:46:13.271855] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031830, cid 3, qid 0 00:17:10.627 [2024-04-18 13:46:13.271862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031990, cid 4, qid 0 00:17:10.627 [2024-04-18 13:46:13.272059] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.627 [2024-04-18 13:46:13.272073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.627 [2024-04-18 13:46:13.272082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272089] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031990) on tqpair=0xfd15b0 00:17:10.627 [2024-04-18 13:46:13.272099] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:10.627 [2024-04-18 13:46:13.272108] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:10.627 [2024-04-18 13:46:13.272125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272134] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 
13:46:13.272144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.627 [2024-04-18 13:46:13.272187] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031990, cid 4, qid 0 00:17:10.627 [2024-04-18 13:46:13.272386] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.627 [2024-04-18 13:46:13.272402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.627 [2024-04-18 13:46:13.272409] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272415] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd15b0): datao=0, datal=4096, cccid=4 00:17:10.627 [2024-04-18 13:46:13.272423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1031990) on tqpair(0xfd15b0): expected_datao=0, payload_size=4096 00:17:10.627 [2024-04-18 13:46:13.272430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272440] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272448] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.627 [2024-04-18 13:46:13.272484] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.627 [2024-04-18 13:46:13.272490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272496] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031990) on tqpair=0xfd15b0 00:17:10.627 [2024-04-18 13:46:13.272517] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:10.627 [2024-04-18 13:46:13.272562] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.272581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.627 [2024-04-18 13:46:13.272592] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.627 [2024-04-18 13:46:13.272604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd15b0) 00:17:10.627 [2024-04-18 13:46:13.272612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.627 [2024-04-18 13:46:13.272638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031990, cid 4, qid 0 00:17:10.627 [2024-04-18 13:46:13.272649] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031af0, cid 5, qid 0 00:17:10.628 [2024-04-18 13:46:13.272897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.628 [2024-04-18 13:46:13.272911] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.628 [2024-04-18 13:46:13.272917] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.272923] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd15b0): datao=0, datal=1024, cccid=4 00:17:10.628 [2024-04-18 13:46:13.272930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1031990) on tqpair(0xfd15b0): expected_datao=0, payload_size=1024 00:17:10.628 [2024-04-18 13:46:13.272940] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.272950] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:10.628 [2024-04-18 13:46:13.272957] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.272964] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.628 [2024-04-18 13:46:13.272973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.628 [2024-04-18 13:46:13.272979] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.272985] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031af0) on tqpair=0xfd15b0 00:17:10.628 [2024-04-18 13:46:13.313320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.628 [2024-04-18 13:46:13.313338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.628 [2024-04-18 13:46:13.313345] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031990) on tqpair=0xfd15b0 00:17:10.628 [2024-04-18 13:46:13.313376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313387] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd15b0) 00:17:10.628 [2024-04-18 13:46:13.313398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.628 [2024-04-18 13:46:13.313427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031990, cid 4, qid 0 00:17:10.628 [2024-04-18 13:46:13.313582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.628 [2024-04-18 13:46:13.313594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.628 [2024-04-18 13:46:13.313600] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313606] 
nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd15b0): datao=0, datal=3072, cccid=4 00:17:10.628 [2024-04-18 13:46:13.313613] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1031990) on tqpair(0xfd15b0): expected_datao=0, payload_size=3072 00:17:10.628 [2024-04-18 13:46:13.313620] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313629] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313636] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313680] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.628 [2024-04-18 13:46:13.313690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.628 [2024-04-18 13:46:13.313696] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031990) on tqpair=0xfd15b0 00:17:10.628 [2024-04-18 13:46:13.313718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd15b0) 00:17:10.628 [2024-04-18 13:46:13.313735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.628 [2024-04-18 13:46:13.313761] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031990, cid 4, qid 0 00:17:10.628 [2024-04-18 13:46:13.313876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.628 [2024-04-18 13:46:13.313887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.628 [2024-04-18 13:46:13.313894] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 
13:46:13.313900] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd15b0): datao=0, datal=8, cccid=4 00:17:10.628 [2024-04-18 13:46:13.313907] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1031990) on tqpair(0xfd15b0): expected_datao=0, payload_size=8 00:17:10.628 [2024-04-18 13:46:13.313918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313928] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.313934] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.354323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.628 [2024-04-18 13:46:13.354341] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.628 [2024-04-18 13:46:13.354348] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.628 [2024-04-18 13:46:13.354355] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031990) on tqpair=0xfd15b0
00:17:10.628 =====================================================
00:17:10.628 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:10.628 =====================================================
00:17:10.628 Controller Capabilities/Features
00:17:10.628 ================================
00:17:10.628 Vendor ID: 0000
00:17:10.628 Subsystem Vendor ID: 0000
00:17:10.628 Serial Number: ....................
00:17:10.628 Model Number: ........................................
00:17:10.628 Firmware Version: 24.05
00:17:10.628 Recommended Arb Burst: 0
00:17:10.628 IEEE OUI Identifier: 00 00 00
00:17:10.628 Multi-path I/O
00:17:10.628 May have multiple subsystem ports: No
00:17:10.628 May have multiple controllers: No
00:17:10.628 Associated with SR-IOV VF: No
00:17:10.628 Max Data Transfer Size: 131072
00:17:10.628 Max Number of Namespaces: 0
00:17:10.628 Max Number of I/O Queues: 1024
00:17:10.628 NVMe Specification Version (VS): 1.3
00:17:10.628 NVMe Specification Version (Identify): 1.3
00:17:10.628 Maximum Queue Entries: 128
00:17:10.628 Contiguous Queues Required: Yes
00:17:10.628 Arbitration Mechanisms Supported
00:17:10.628 Weighted Round Robin: Not Supported
00:17:10.628 Vendor Specific: Not Supported
00:17:10.628 Reset Timeout: 15000 ms
00:17:10.628 Doorbell Stride: 4 bytes
00:17:10.628 NVM Subsystem Reset: Not Supported
00:17:10.628 Command Sets Supported
00:17:10.628 NVM Command Set: Supported
00:17:10.628 Boot Partition: Not Supported
00:17:10.628 Memory Page Size Minimum: 4096 bytes
00:17:10.628 Memory Page Size Maximum: 4096 bytes
00:17:10.628 Persistent Memory Region: Not Supported
00:17:10.628 Optional Asynchronous Events Supported
00:17:10.628 Namespace Attribute Notices: Not Supported
00:17:10.628 Firmware Activation Notices: Not Supported
00:17:10.628 ANA Change Notices: Not Supported
00:17:10.628 PLE Aggregate Log Change Notices: Not Supported
00:17:10.628 LBA Status Info Alert Notices: Not Supported
00:17:10.628 EGE Aggregate Log Change Notices: Not Supported
00:17:10.628 Normal NVM Subsystem Shutdown event: Not Supported
00:17:10.628 Zone Descriptor Change Notices: Not Supported
00:17:10.628 Discovery Log Change Notices: Supported
00:17:10.628 Controller Attributes
00:17:10.628 128-bit Host Identifier: Not Supported
00:17:10.628 Non-Operational Permissive Mode: Not Supported
00:17:10.628 NVM Sets: Not Supported
00:17:10.628 Read Recovery Levels: Not Supported
00:17:10.628 Endurance Groups: Not Supported
00:17:10.628 Predictable Latency Mode: Not Supported
00:17:10.628 Traffic Based Keep ALive: Not Supported
00:17:10.628 Namespace Granularity: Not Supported
00:17:10.628 SQ Associations: Not Supported
00:17:10.628 UUID List: Not Supported
00:17:10.628 Multi-Domain Subsystem: Not Supported
00:17:10.628 Fixed Capacity Management: Not Supported
00:17:10.628 Variable Capacity Management: Not Supported
00:17:10.628 Delete Endurance Group: Not Supported
00:17:10.628 Delete NVM Set: Not Supported
00:17:10.628 Extended LBA Formats Supported: Not Supported
00:17:10.628 Flexible Data Placement Supported: Not Supported
00:17:10.628
00:17:10.628 Controller Memory Buffer Support
00:17:10.628 ================================
00:17:10.628 Supported: No
00:17:10.628
00:17:10.628 Persistent Memory Region Support
00:17:10.628 ================================
00:17:10.628 Supported: No
00:17:10.628
00:17:10.628 Admin Command Set Attributes
00:17:10.628 ============================
00:17:10.628 Security Send/Receive: Not Supported
00:17:10.628 Format NVM: Not Supported
00:17:10.628 Firmware Activate/Download: Not Supported
00:17:10.628 Namespace Management: Not Supported
00:17:10.628 Device Self-Test: Not Supported
00:17:10.628 Directives: Not Supported
00:17:10.628 NVMe-MI: Not Supported
00:17:10.628 Virtualization Management: Not Supported
00:17:10.628 Doorbell Buffer Config: Not Supported
00:17:10.628 Get LBA Status Capability: Not Supported
00:17:10.628 Command & Feature Lockdown Capability: Not Supported
00:17:10.628 Abort Command Limit: 1
00:17:10.628 Async Event Request Limit: 4
00:17:10.628 Number of Firmware Slots: N/A
00:17:10.628 Firmware Slot 1 Read-Only: N/A
00:17:10.628 Firmware Activation Without Reset: N/A
00:17:10.628 Multiple Update Detection Support: N/A
00:17:10.628 Firmware Update Granularity: No Information Provided
00:17:10.628 Per-Namespace SMART Log: No
00:17:10.628 Asymmetric Namespace Access Log Page: Not Supported
00:17:10.628 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:10.628 Command Effects Log Page: Not Supported
00:17:10.628 Get Log Page Extended Data: Supported
00:17:10.628 Telemetry Log Pages: Not Supported
00:17:10.628 Persistent Event Log Pages: Not Supported
00:17:10.628 Supported Log Pages Log Page: May Support
00:17:10.628 Commands Supported & Effects Log Page: Not Supported
00:17:10.628 Feature Identifiers & Effects Log Page:May Support
00:17:10.628 NVMe-MI Commands & Effects Log Page: May Support
00:17:10.628 Data Area 4 for Telemetry Log: Not Supported
00:17:10.628 Error Log Page Entries Supported: 128
00:17:10.628 Keep Alive: Not Supported
00:17:10.629
00:17:10.629 NVM Command Set Attributes
00:17:10.629 ==========================
00:17:10.629 Submission Queue Entry Size
00:17:10.629 Max: 1
00:17:10.629 Min: 1
00:17:10.629 Completion Queue Entry Size
00:17:10.629 Max: 1
00:17:10.629 Min: 1
00:17:10.629 Number of Namespaces: 0
00:17:10.629 Compare Command: Not Supported
00:17:10.629 Write Uncorrectable Command: Not Supported
00:17:10.629 Dataset Management Command: Not Supported
00:17:10.629 Write Zeroes Command: Not Supported
00:17:10.629 Set Features Save Field: Not Supported
00:17:10.629 Reservations: Not Supported
00:17:10.629 Timestamp: Not Supported
00:17:10.629 Copy: Not Supported
00:17:10.629 Volatile Write Cache: Not Present
00:17:10.629 Atomic Write Unit (Normal): 1
00:17:10.629 Atomic Write Unit (PFail): 1
00:17:10.629 Atomic Compare & Write Unit: 1
00:17:10.629 Fused Compare & Write: Supported
00:17:10.629 Scatter-Gather List
00:17:10.629 SGL Command Set: Supported
00:17:10.629 SGL Keyed: Supported
00:17:10.629 SGL Bit Bucket Descriptor: Not Supported
00:17:10.629 SGL Metadata Pointer: Not Supported
00:17:10.629 Oversized SGL: Not Supported
00:17:10.629 SGL Metadata Address: Not Supported
00:17:10.629 SGL Offset: Supported
00:17:10.629 Transport SGL Data Block: Not Supported
00:17:10.629 Replay Protected Memory Block: Not Supported
00:17:10.629
00:17:10.629 Firmware Slot Information
00:17:10.629 =========================
00:17:10.629 Active slot: 0
00:17:10.629
00:17:10.629
00:17:10.629 Error Log
00:17:10.629 =========
00:17:10.629
00:17:10.629 Active Namespaces
00:17:10.629 =================
00:17:10.629 Discovery Log Page
00:17:10.629 ==================
00:17:10.629 Generation Counter: 2
00:17:10.629 Number of Records: 2
00:17:10.629 Record Format: 0
00:17:10.629
00:17:10.629 Discovery Log Entry 0
00:17:10.629 ----------------------
00:17:10.629 Transport Type: 3 (TCP)
00:17:10.629 Address Family: 1 (IPv4)
00:17:10.629 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:10.629 Entry Flags:
00:17:10.629 Duplicate Returned Information: 1
00:17:10.629 Explicit Persistent Connection Support for Discovery: 1
00:17:10.629 Transport Requirements:
00:17:10.629 Secure Channel: Not Required
00:17:10.629 Port ID: 0 (0x0000)
00:17:10.629 Controller ID: 65535 (0xffff)
00:17:10.629 Admin Max SQ Size: 128
00:17:10.629 Transport Service Identifier: 4420
00:17:10.629 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:10.629 Transport Address: 10.0.0.2
00:17:10.629 Discovery Log Entry 1
00:17:10.629 ----------------------
00:17:10.629 Transport Type: 3 (TCP)
00:17:10.629 Address Family: 1 (IPv4)
00:17:10.629 Subsystem Type: 2 (NVM Subsystem)
00:17:10.629 Entry Flags:
00:17:10.629 Duplicate Returned Information: 0
00:17:10.629 Explicit Persistent Connection Support for Discovery: 0
00:17:10.629 Transport Requirements:
00:17:10.629 Secure Channel: Not Required
00:17:10.629 Port ID: 0 (0x0000)
00:17:10.629 Controller ID: 65535 (0xffff)
00:17:10.629 Admin Max SQ Size: 128
00:17:10.629 Transport Service Identifier: 4420
00:17:10.629 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:10.629 Transport Address: 10.0.0.2 [2024-04-18 13:46:13.354468] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:10.629 [2024-04-18 13:46:13.354509]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.629 [2024-04-18 13:46:13.354527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.629 [2024-04-18 13:46:13.354536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.629 [2024-04-18 13:46:13.354544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.629 [2024-04-18 13:46:13.354557] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.354564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.354570] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd15b0) 00:17:10.629 [2024-04-18 13:46:13.354580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.629 [2024-04-18 13:46:13.354604] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031830, cid 3, qid 0 00:17:10.629 [2024-04-18 13:46:13.354802] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.629 [2024-04-18 13:46:13.354814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.629 [2024-04-18 13:46:13.354820] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.354826] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031830) on tqpair=0xfd15b0 00:17:10.629 [2024-04-18 13:46:13.354838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.354846] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.629 [2024-04-18 
13:46:13.354852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd15b0) 00:17:10.629 [2024-04-18 13:46:13.354861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.629 [2024-04-18 13:46:13.354895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031830, cid 3, qid 0 00:17:10.629 [2024-04-18 13:46:13.355025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.629 [2024-04-18 13:46:13.355039] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.629 [2024-04-18 13:46:13.355045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.355051] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031830) on tqpair=0xfd15b0 00:17:10.629 [2024-04-18 13:46:13.355060] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:10.629 [2024-04-18 13:46:13.355069] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:10.629 [2024-04-18 13:46:13.355084] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.355093] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.355099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd15b0) 00:17:10.629 [2024-04-18 13:46:13.355113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.629 [2024-04-18 13:46:13.355133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031830, cid 3, qid 0 00:17:10.629 [2024-04-18 13:46:13.359202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.629 [2024-04-18 
13:46:13.359218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.629 [2024-04-18 13:46:13.359225] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.359231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031830) on tqpair=0xfd15b0 00:17:10.629 [2024-04-18 13:46:13.359251] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.359261] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.359267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd15b0) 00:17:10.629 [2024-04-18 13:46:13.359277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.629 [2024-04-18 13:46:13.359298] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1031830, cid 3, qid 0 00:17:10.629 [2024-04-18 13:46:13.359464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.629 [2024-04-18 13:46:13.359475] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.629 [2024-04-18 13:46:13.359481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.629 [2024-04-18 13:46:13.359503] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1031830) on tqpair=0xfd15b0 00:17:10.629 [2024-04-18 13:46:13.359517] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:17:10.629 00:17:10.629 13:46:13 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:10.629 [2024-04-18 13:46:13.391211] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:17:10.629 [2024-04-18 13:46:13.391255] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637987 ] 00:17:10.629 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.629 [2024-04-18 13:46:13.425025] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:10.629 [2024-04-18 13:46:13.425071] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:10.629 [2024-04-18 13:46:13.425081] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:10.629 [2024-04-18 13:46:13.425094] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:10.629 [2024-04-18 13:46:13.425105] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:10.629 [2024-04-18 13:46:13.425409] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:10.629 [2024-04-18 13:46:13.425450] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1db35b0 0 00:17:10.891 [2024-04-18 13:46:13.440189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:10.891 [2024-04-18 13:46:13.440208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:10.891 [2024-04-18 13:46:13.440216] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:10.891 [2024-04-18 13:46:13.440222] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:10.891 [2024-04-18 13:46:13.440267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.440284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 
13:46:13.440292] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.440306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:10.891 [2024-04-18 13:46:13.440331] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 [2024-04-18 13:46:13.448188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.448205] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.891 [2024-04-18 13:46:13.448212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.891 [2024-04-18 13:46:13.448239] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:10.891 [2024-04-18 13:46:13.448250] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:10.891 [2024-04-18 13:46:13.448259] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:10.891 [2024-04-18 13:46:13.448275] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448290] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.448301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.891 [2024-04-18 13:46:13.448324] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 
[2024-04-18 13:46:13.448482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.448494] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.891 [2024-04-18 13:46:13.448500] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448507] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.891 [2024-04-18 13:46:13.448515] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:10.891 [2024-04-18 13:46:13.448528] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:10.891 [2024-04-18 13:46:13.448539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448552] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.448562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.891 [2024-04-18 13:46:13.448582] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 [2024-04-18 13:46:13.448688] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.448702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.891 [2024-04-18 13:46:13.448709] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448715] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.891 [2024-04-18 13:46:13.448724] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:10.891 [2024-04-18 13:46:13.448738] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:10.891 [2024-04-18 13:46:13.448749] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448766] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.448776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.891 [2024-04-18 13:46:13.448797] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 [2024-04-18 13:46:13.448893] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.448904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.891 [2024-04-18 13:46:13.448910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448916] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.891 [2024-04-18 13:46:13.448925] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:10.891 [2024-04-18 13:46:13.448941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448950] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.448956] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.448965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.891 [2024-04-18 13:46:13.448984] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 [2024-04-18 13:46:13.449085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.449099] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.891 [2024-04-18 13:46:13.449106] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.449112] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.891 [2024-04-18 13:46:13.449120] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:10.891 [2024-04-18 13:46:13.449128] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:10.891 [2024-04-18 13:46:13.449141] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:10.891 [2024-04-18 13:46:13.449251] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:10.891 [2024-04-18 13:46:13.449261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:10.891 [2024-04-18 13:46:13.449274] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.449281] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.891 [2024-04-18 13:46:13.449287] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.891 [2024-04-18 13:46:13.449297] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.891 [2024-04-18 13:46:13.449319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.891 [2024-04-18 13:46:13.449459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.891 [2024-04-18 13:46:13.449484] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.449491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.449506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:10.892 [2024-04-18 13:46:13.449523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449534] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.449551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.892 [2024-04-18 13:46:13.449570] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.892 [2024-04-18 13:46:13.449674] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.892 [2024-04-18 13:46:13.449688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.449694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449701] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on 
tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.449709] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:10.892 [2024-04-18 13:46:13.449717] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.449730] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:10.892 [2024-04-18 13:46:13.449743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.449758] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449766] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.449776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.892 [2024-04-18 13:46:13.449796] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.892 [2024-04-18 13:46:13.449941] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.892 [2024-04-18 13:46:13.449953] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.892 [2024-04-18 13:46:13.449959] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449965] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=4096, cccid=0 00:17:10.892 [2024-04-18 13:46:13.449972] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13410) on tqpair(0x1db35b0): expected_datao=0, payload_size=4096 00:17:10.892 
[2024-04-18 13:46:13.449979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449989] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.449996] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.450028] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.892 [2024-04-18 13:46:13.450038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.450045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.450051] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.450062] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:10.892 [2024-04-18 13:46:13.450070] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:10.892 [2024-04-18 13:46:13.450077] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:10.892 [2024-04-18 13:46:13.450083] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:10.892 [2024-04-18 13:46:13.450090] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:10.892 [2024-04-18 13:46:13.450101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.450115] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.450127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 
13:46:13.450134] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.450139] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.450149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.892 [2024-04-18 13:46:13.450191] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.892 [2024-04-18 13:46:13.454200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.892 [2024-04-18 13:46:13.454217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.454224] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13410) on tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.454243] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454250] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454256] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.892 [2024-04-18 13:46:13.454276] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454282] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454288] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.892 [2024-04-18 13:46:13.454306] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454318] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.892 [2024-04-18 13:46:13.454336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.892 [2024-04-18 13:46:13.454365] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.454384] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.454397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454404] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.892 [2024-04-18 13:46:13.454437] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13410, cid 0, qid 0 00:17:10.892 [2024-04-18 13:46:13.454451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13570, cid 1, qid 0 00:17:10.892 [2024-04-18 13:46:13.454460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e136d0, cid 2, qid 0 00:17:10.892 [2024-04-18 13:46:13.454468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.892 [2024-04-18 13:46:13.454475] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.892 [2024-04-18 13:46:13.454661] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.892 [2024-04-18 13:46:13.454675] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.454682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454688] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.454697] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:10.892 [2024-04-18 13:46:13.454705] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.454723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.454735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.454746] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454753] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.454758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.454768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.892 [2024-04-18 13:46:13.454790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.892 [2024-04-18 13:46:13.454979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.892 [2024-04-18 13:46:13.454993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.892 [2024-04-18 13:46:13.455000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.455006] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.892 [2024-04-18 13:46:13.455057] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.455075] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:10.892 [2024-04-18 13:46:13.455089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.892 [2024-04-18 13:46:13.455096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.892 [2024-04-18 13:46:13.455106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.892 [2024-04-18 13:46:13.455126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.892 [2024-04-18 13:46:13.455284] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.893 [2024-04-18 13:46:13.455299] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.893 [2024-04-18 13:46:13.455306] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.455313] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=4096, cccid=4 00:17:10.893 [2024-04-18 13:46:13.455320] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13990) on tqpair(0x1db35b0): expected_datao=0, payload_size=4096 00:17:10.893 [2024-04-18 13:46:13.455328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.455349] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.455358] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497191] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.497208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.497216] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497222] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.497240] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:10.893 [2024-04-18 13:46:13.497260] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.497279] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.497298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 
[2024-04-18 13:46:13.497305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.497316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.497338] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.893 [2024-04-18 13:46:13.497518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.893 [2024-04-18 13:46:13.497533] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.893 [2024-04-18 13:46:13.497539] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497546] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=4096, cccid=4 00:17:10.893 [2024-04-18 13:46:13.497553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13990) on tqpair(0x1db35b0): expected_datao=0, payload_size=4096 00:17:10.893 [2024-04-18 13:46:13.497560] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497569] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497576] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497588] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.497597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.497603] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497610] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.497632] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.497650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.497663] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497670] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.497680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.497701] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.893 [2024-04-18 13:46:13.497820] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.893 [2024-04-18 13:46:13.497834] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.893 [2024-04-18 13:46:13.497840] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497846] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=4096, cccid=4 00:17:10.893 [2024-04-18 13:46:13.497857] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13990) on tqpair(0x1db35b0): expected_datao=0, payload_size=4096 00:17:10.893 [2024-04-18 13:46:13.497864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497881] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.497890] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.538329] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.538336] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.538360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538395] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538406] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538415] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538424] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:10.893 [2024-04-18 13:46:13.538432] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:10.893 [2024-04-18 13:46:13.538441] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:10.893 [2024-04-18 13:46:13.538477] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538486] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 
13:46:13.538506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.538517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538545] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.538554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.893 [2024-04-18 13:46:13.538579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.893 [2024-04-18 13:46:13.538590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13af0, cid 5, qid 0 00:17:10.893 [2024-04-18 13:46:13.538806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.538817] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.538824] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.538840] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.538849] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.538856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.538862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13af0) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.538882] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 
[2024-04-18 13:46:13.538891] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.538901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.538920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13af0, cid 5, qid 0 00:17:10.893 [2024-04-18 13:46:13.539030] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.539044] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.539051] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539057] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13af0) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.539079] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539088] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.539098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.539117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13af0, cid 5, qid 0 00:17:10.893 [2024-04-18 13:46:13.539245] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.539258] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.539266] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539272] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13af0) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.539289] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539299] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1db35b0) 00:17:10.893 [2024-04-18 13:46:13.539309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.893 [2024-04-18 13:46:13.539329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13af0, cid 5, qid 0 00:17:10.893 [2024-04-18 13:46:13.539436] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.893 [2024-04-18 13:46:13.539450] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.893 [2024-04-18 13:46:13.539472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539479] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13af0) on tqpair=0x1db35b0 00:17:10.893 [2024-04-18 13:46:13.539500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.893 [2024-04-18 13:46:13.539510] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1db35b0) 00:17:10.894 [2024-04-18 13:46:13.539520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.894 [2024-04-18 13:46:13.539531] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.539538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1db35b0) 00:17:10.894 [2024-04-18 13:46:13.539547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.894 [2024-04-18 13:46:13.539557] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.539564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1db35b0) 00:17:10.894 [2024-04-18 13:46:13.539573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.894 [2024-04-18 13:46:13.539587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.539595] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1db35b0) 00:17:10.894 [2024-04-18 13:46:13.539604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.894 [2024-04-18 13:46:13.539629] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13af0, cid 5, qid 0 00:17:10.894 [2024-04-18 13:46:13.539639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13990, cid 4, qid 0 00:17:10.894 [2024-04-18 13:46:13.539646] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13c50, cid 6, qid 0 00:17:10.894 [2024-04-18 13:46:13.539653] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13db0, cid 7, qid 0 00:17:10.894 [2024-04-18 13:46:13.539910] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.894 [2024-04-18 13:46:13.539924] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.894 [2024-04-18 13:46:13.539931] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.539937] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=8192, cccid=5 00:17:10.894 [2024-04-18 13:46:13.539944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13af0) on 
tqpair(0x1db35b0): expected_datao=0, payload_size=8192 00:17:10.894 [2024-04-18 13:46:13.539951] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540029] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540038] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540047] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.894 [2024-04-18 13:46:13.540055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.894 [2024-04-18 13:46:13.540061] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540067] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=512, cccid=4 00:17:10.894 [2024-04-18 13:46:13.540074] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13990) on tqpair(0x1db35b0): expected_datao=0, payload_size=512 00:17:10.894 [2024-04-18 13:46:13.540081] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540089] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540096] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.894 [2024-04-18 13:46:13.540112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.894 [2024-04-18 13:46:13.540118] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540124] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=512, cccid=6 00:17:10.894 [2024-04-18 13:46:13.540131] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13c50) on tqpair(0x1db35b0): expected_datao=0, payload_size=512 00:17:10.894 
[2024-04-18 13:46:13.540137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540146] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540152] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540184] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:10.894 [2024-04-18 13:46:13.540194] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:10.894 [2024-04-18 13:46:13.540201] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540207] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1db35b0): datao=0, datal=4096, cccid=7 00:17:10.894 [2024-04-18 13:46:13.540214] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e13db0) on tqpair(0x1db35b0): expected_datao=0, payload_size=4096 00:17:10.894 [2024-04-18 13:46:13.540225] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540235] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540242] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540253] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.894 [2024-04-18 13:46:13.540263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.894 [2024-04-18 13:46:13.540269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540275] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13af0) on tqpair=0x1db35b0 00:17:10.894 [2024-04-18 13:46:13.540295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.894 [2024-04-18 13:46:13.540306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.894 [2024-04-18 13:46:13.540313] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540320] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13990) on tqpair=0x1db35b0 00:17:10.894 [2024-04-18 13:46:13.540334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.894 [2024-04-18 13:46:13.540345] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.894 [2024-04-18 13:46:13.540351] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13c50) on tqpair=0x1db35b0 00:17:10.894 [2024-04-18 13:46:13.540369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.894 [2024-04-18 13:46:13.540378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.894 [2024-04-18 13:46:13.540385] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.894 [2024-04-18 13:46:13.540391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13db0) on tqpair=0x1db35b0 00:17:10.894 ===================================================== 00:17:10.894 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:10.894 ===================================================== 00:17:10.894 Controller Capabilities/Features 00:17:10.894 ================================ 00:17:10.894 Vendor ID: 8086 00:17:10.894 Subsystem Vendor ID: 8086 00:17:10.894 Serial Number: SPDK00000000000001 00:17:10.894 Model Number: SPDK bdev Controller 00:17:10.894 Firmware Version: 24.05 00:17:10.894 Recommended Arb Burst: 6 00:17:10.894 IEEE OUI Identifier: e4 d2 5c 00:17:10.894 Multi-path I/O 00:17:10.894 May have multiple subsystem ports: Yes 00:17:10.894 May have multiple controllers: Yes 00:17:10.894 Associated with SR-IOV VF: No 00:17:10.894 Max Data Transfer Size: 131072 00:17:10.894 Max Number of Namespaces: 32 
00:17:10.894 Max Number of I/O Queues: 127 00:17:10.894 NVMe Specification Version (VS): 1.3 00:17:10.894 NVMe Specification Version (Identify): 1.3 00:17:10.894 Maximum Queue Entries: 128 00:17:10.894 Contiguous Queues Required: Yes 00:17:10.894 Arbitration Mechanisms Supported 00:17:10.894 Weighted Round Robin: Not Supported 00:17:10.894 Vendor Specific: Not Supported 00:17:10.894 Reset Timeout: 15000 ms 00:17:10.894 Doorbell Stride: 4 bytes 00:17:10.894 NVM Subsystem Reset: Not Supported 00:17:10.894 Command Sets Supported 00:17:10.894 NVM Command Set: Supported 00:17:10.894 Boot Partition: Not Supported 00:17:10.894 Memory Page Size Minimum: 4096 bytes 00:17:10.894 Memory Page Size Maximum: 4096 bytes 00:17:10.894 Persistent Memory Region: Not Supported 00:17:10.894 Optional Asynchronous Events Supported 00:17:10.894 Namespace Attribute Notices: Supported 00:17:10.894 Firmware Activation Notices: Not Supported 00:17:10.894 ANA Change Notices: Not Supported 00:17:10.894 PLE Aggregate Log Change Notices: Not Supported 00:17:10.894 LBA Status Info Alert Notices: Not Supported 00:17:10.894 EGE Aggregate Log Change Notices: Not Supported 00:17:10.894 Normal NVM Subsystem Shutdown event: Not Supported 00:17:10.894 Zone Descriptor Change Notices: Not Supported 00:17:10.894 Discovery Log Change Notices: Not Supported 00:17:10.894 Controller Attributes 00:17:10.894 128-bit Host Identifier: Supported 00:17:10.894 Non-Operational Permissive Mode: Not Supported 00:17:10.894 NVM Sets: Not Supported 00:17:10.894 Read Recovery Levels: Not Supported 00:17:10.894 Endurance Groups: Not Supported 00:17:10.894 Predictable Latency Mode: Not Supported 00:17:10.894 Traffic Based Keep ALive: Not Supported 00:17:10.895 Namespace Granularity: Not Supported 00:17:10.895 SQ Associations: Not Supported 00:17:10.895 UUID List: Not Supported 00:17:10.895 Multi-Domain Subsystem: Not Supported 00:17:10.895 Fixed Capacity Management: Not Supported 00:17:10.895 Variable Capacity Management: Not 
Supported 00:17:10.895 Delete Endurance Group: Not Supported 00:17:10.895 Delete NVM Set: Not Supported 00:17:10.895 Extended LBA Formats Supported: Not Supported 00:17:10.895 Flexible Data Placement Supported: Not Supported 00:17:10.895 00:17:10.895 Controller Memory Buffer Support 00:17:10.895 ================================ 00:17:10.895 Supported: No 00:17:10.895 00:17:10.895 Persistent Memory Region Support 00:17:10.895 ================================ 00:17:10.895 Supported: No 00:17:10.895 00:17:10.895 Admin Command Set Attributes 00:17:10.895 ============================ 00:17:10.895 Security Send/Receive: Not Supported 00:17:10.895 Format NVM: Not Supported 00:17:10.895 Firmware Activate/Download: Not Supported 00:17:10.895 Namespace Management: Not Supported 00:17:10.895 Device Self-Test: Not Supported 00:17:10.895 Directives: Not Supported 00:17:10.895 NVMe-MI: Not Supported 00:17:10.895 Virtualization Management: Not Supported 00:17:10.895 Doorbell Buffer Config: Not Supported 00:17:10.895 Get LBA Status Capability: Not Supported 00:17:10.895 Command & Feature Lockdown Capability: Not Supported 00:17:10.895 Abort Command Limit: 4 00:17:10.895 Async Event Request Limit: 4 00:17:10.895 Number of Firmware Slots: N/A 00:17:10.895 Firmware Slot 1 Read-Only: N/A 00:17:10.895 Firmware Activation Without Reset: N/A 00:17:10.895 Multiple Update Detection Support: N/A 00:17:10.895 Firmware Update Granularity: No Information Provided 00:17:10.895 Per-Namespace SMART Log: No 00:17:10.895 Asymmetric Namespace Access Log Page: Not Supported 00:17:10.895 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:10.895 Command Effects Log Page: Supported 00:17:10.895 Get Log Page Extended Data: Supported 00:17:10.895 Telemetry Log Pages: Not Supported 00:17:10.895 Persistent Event Log Pages: Not Supported 00:17:10.895 Supported Log Pages Log Page: May Support 00:17:10.895 Commands Supported & Effects Log Page: Not Supported 00:17:10.895 Feature Identifiers & Effects Log Page:May 
Support 00:17:10.895 NVMe-MI Commands & Effects Log Page: May Support 00:17:10.895 Data Area 4 for Telemetry Log: Not Supported 00:17:10.895 Error Log Page Entries Supported: 128 00:17:10.895 Keep Alive: Supported 00:17:10.895 Keep Alive Granularity: 10000 ms 00:17:10.895 00:17:10.895 NVM Command Set Attributes 00:17:10.895 ========================== 00:17:10.895 Submission Queue Entry Size 00:17:10.895 Max: 64 00:17:10.895 Min: 64 00:17:10.895 Completion Queue Entry Size 00:17:10.895 Max: 16 00:17:10.895 Min: 16 00:17:10.895 Number of Namespaces: 32 00:17:10.895 Compare Command: Supported 00:17:10.895 Write Uncorrectable Command: Not Supported 00:17:10.895 Dataset Management Command: Supported 00:17:10.895 Write Zeroes Command: Supported 00:17:10.895 Set Features Save Field: Not Supported 00:17:10.895 Reservations: Supported 00:17:10.895 Timestamp: Not Supported 00:17:10.895 Copy: Supported 00:17:10.895 Volatile Write Cache: Present 00:17:10.895 Atomic Write Unit (Normal): 1 00:17:10.895 Atomic Write Unit (PFail): 1 00:17:10.895 Atomic Compare & Write Unit: 1 00:17:10.895 Fused Compare & Write: Supported 00:17:10.895 Scatter-Gather List 00:17:10.895 SGL Command Set: Supported 00:17:10.895 SGL Keyed: Supported 00:17:10.895 SGL Bit Bucket Descriptor: Not Supported 00:17:10.895 SGL Metadata Pointer: Not Supported 00:17:10.895 Oversized SGL: Not Supported 00:17:10.895 SGL Metadata Address: Not Supported 00:17:10.895 SGL Offset: Supported 00:17:10.895 Transport SGL Data Block: Not Supported 00:17:10.895 Replay Protected Memory Block: Not Supported 00:17:10.895 00:17:10.895 Firmware Slot Information 00:17:10.895 ========================= 00:17:10.895 Active slot: 1 00:17:10.895 Slot 1 Firmware Revision: 24.05 00:17:10.895 00:17:10.895 00:17:10.895 Commands Supported and Effects 00:17:10.895 ============================== 00:17:10.895 Admin Commands 00:17:10.895 -------------- 00:17:10.895 Get Log Page (02h): Supported 00:17:10.895 Identify (06h): Supported 00:17:10.895 
Abort (08h): Supported 00:17:10.895 Set Features (09h): Supported 00:17:10.895 Get Features (0Ah): Supported 00:17:10.895 Asynchronous Event Request (0Ch): Supported 00:17:10.895 Keep Alive (18h): Supported 00:17:10.895 I/O Commands 00:17:10.895 ------------ 00:17:10.895 Flush (00h): Supported LBA-Change 00:17:10.895 Write (01h): Supported LBA-Change 00:17:10.895 Read (02h): Supported 00:17:10.895 Compare (05h): Supported 00:17:10.895 Write Zeroes (08h): Supported LBA-Change 00:17:10.895 Dataset Management (09h): Supported LBA-Change 00:17:10.895 Copy (19h): Supported LBA-Change 00:17:10.895 Unknown (79h): Supported LBA-Change 00:17:10.895 Unknown (7Ah): Supported 00:17:10.895 00:17:10.895 Error Log 00:17:10.895 ========= 00:17:10.895 00:17:10.895 Arbitration 00:17:10.895 =========== 00:17:10.895 Arbitration Burst: 1 00:17:10.895 00:17:10.895 Power Management 00:17:10.895 ================ 00:17:10.895 Number of Power States: 1 00:17:10.895 Current Power State: Power State #0 00:17:10.895 Power State #0: 00:17:10.895 Max Power: 0.00 W 00:17:10.895 Non-Operational State: Operational 00:17:10.895 Entry Latency: Not Reported 00:17:10.895 Exit Latency: Not Reported 00:17:10.895 Relative Read Throughput: 0 00:17:10.895 Relative Read Latency: 0 00:17:10.895 Relative Write Throughput: 0 00:17:10.895 Relative Write Latency: 0 00:17:10.895 Idle Power: Not Reported 00:17:10.895 Active Power: Not Reported 00:17:10.895 Non-Operational Permissive Mode: Not Supported 00:17:10.895 00:17:10.895 Health Information 00:17:10.895 ================== 00:17:10.895 Critical Warnings: 00:17:10.895 Available Spare Space: OK 00:17:10.895 Temperature: OK 00:17:10.895 Device Reliability: OK 00:17:10.895 Read Only: No 00:17:10.895 Volatile Memory Backup: OK 00:17:10.895 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:10.895 Temperature Threshold: [2024-04-18 13:46:13.540524] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.895 [2024-04-18 13:46:13.540536] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1db35b0) 00:17:10.895 [2024-04-18 13:46:13.540547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.895 [2024-04-18 13:46:13.540569] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13db0, cid 7, qid 0 00:17:10.895 [2024-04-18 13:46:13.540728] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.895 [2024-04-18 13:46:13.540742] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.895 [2024-04-18 13:46:13.540748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.895 [2024-04-18 13:46:13.540755] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13db0) on tqpair=0x1db35b0 00:17:10.895 [2024-04-18 13:46:13.540795] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:10.895 [2024-04-18 13:46:13.540814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.895 [2024-04-18 13:46:13.540826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.895 [2024-04-18 13:46:13.540836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.895 [2024-04-18 13:46:13.540845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.895 [2024-04-18 13:46:13.540857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.895 [2024-04-18 13:46:13.540864] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.895 [2024-04-18 13:46:13.540870] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.895 [2024-04-18 13:46:13.540881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.540916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.541093] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.541108] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.541122] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.541128] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.541140] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.541147] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.541168] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.545186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.545221] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.545410] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.545425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.545432] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545438] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 
[2024-04-18 13:46:13.545447] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:10.896 [2024-04-18 13:46:13.545454] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:10.896 [2024-04-18 13:46:13.545484] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545493] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545499] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.545509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.545529] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.545637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.545648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.545655] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545661] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.545677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.545701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.545720] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.545823] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.545837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.545843] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545849] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.545866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.545881] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.545894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.545914] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.546010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.546021] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.546028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.546050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546059] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546065] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.546075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.546093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.546226] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.546240] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.546247] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546254] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.546272] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546281] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546287] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.896 [2024-04-18 13:46:13.546298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.896 [2024-04-18 13:46:13.546318] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.896 [2024-04-18 13:46:13.546428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.896 [2024-04-18 13:46:13.546443] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.896 [2024-04-18 13:46:13.546450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.896 [2024-04-18 13:46:13.546492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:10.896 [2024-04-18 13:46:13.546507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send:
*DEBUG*: capsule_cmd cid=3 on tqpair(0x1db35b0) 00:17:10.898 [2024-04-18 13:46:13.553298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.898 [2024-04-18 13:46:13.553319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e13830, cid 3, qid 0 00:17:10.898 [2024-04-18 13:46:13.553523] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:10.898 [2024-04-18 13:46:13.553538] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:10.898 [2024-04-18 13:46:13.553544] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:10.898 [2024-04-18 13:46:13.553551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e13830) on tqpair=0x1db35b0 00:17:10.898 [2024-04-18 13:46:13.553574] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:17:10.898 0 Kelvin (-273 Celsius) 00:17:10.898 Available Spare: 0% 00:17:10.898 Available Spare Threshold: 0% 00:17:10.898 Life Percentage Used: 0% 00:17:10.898 Data Units Read: 0 00:17:10.898 Data Units Written: 0 00:17:10.898 Host Read Commands: 0 00:17:10.898 Host Write Commands: 0 00:17:10.898 Controller Busy Time: 0 minutes 00:17:10.898 Power Cycles: 0 00:17:10.898 Power On Hours: 0 hours 00:17:10.898 Unsafe Shutdowns: 0 00:17:10.898 Unrecoverable Media Errors: 0 00:17:10.898 Lifetime Error Log Entries: 0 00:17:10.898 Warning Temperature Time: 0 minutes 00:17:10.898 Critical Temperature Time: 0 minutes 00:17:10.898 00:17:10.898 Number of Queues 00:17:10.898 ================ 00:17:10.898 Number of I/O Submission Queues: 127 00:17:10.898 Number of I/O Completion Queues: 127 00:17:10.898 00:17:10.898 Active Namespaces 00:17:10.898 ================= 00:17:10.898 Namespace ID:1 00:17:10.898 Error Recovery Timeout: Unlimited 00:17:10.898 Command Set Identifier: NVM (00h) 00:17:10.898 Deallocate: 
Supported 00:17:10.898 Deallocated/Unwritten Error: Not Supported 00:17:10.898 Deallocated Read Value: Unknown 00:17:10.898 Deallocate in Write Zeroes: Not Supported 00:17:10.898 Deallocated Guard Field: 0xFFFF 00:17:10.898 Flush: Supported 00:17:10.898 Reservation: Supported 00:17:10.898 Namespace Sharing Capabilities: Multiple Controllers 00:17:10.898 Size (in LBAs): 131072 (0GiB) 00:17:10.898 Capacity (in LBAs): 131072 (0GiB) 00:17:10.898 Utilization (in LBAs): 131072 (0GiB) 00:17:10.898 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:10.898 EUI64: ABCDEF0123456789 00:17:10.898 UUID: 0bb614b4-7d6e-4f19-ab2f-a98e8876db15 00:17:10.898 Thin Provisioning: Not Supported 00:17:10.898 Per-NS Atomic Units: Yes 00:17:10.898 Atomic Boundary Size (Normal): 0 00:17:10.898 Atomic Boundary Size (PFail): 0 00:17:10.898 Atomic Boundary Offset: 0 00:17:10.898 Maximum Single Source Range Length: 65535 00:17:10.898 Maximum Copy Length: 65535 00:17:10.898 Maximum Source Range Count: 1 00:17:10.898 NGUID/EUI64 Never Reused: No 00:17:10.898 Namespace Write Protected: No 00:17:10.898 Number of LBA Formats: 1 00:17:10.898 Current LBA Format: LBA Format #00 00:17:10.898 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:10.898 00:17:10.898 13:46:13 -- host/identify.sh@51 -- # sync 00:17:10.898 13:46:13 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.898 13:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.898 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:17:10.898 13:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.898 13:46:13 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:10.898 13:46:13 -- host/identify.sh@56 -- # nvmftestfini 00:17:10.898 13:46:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:10.898 13:46:13 -- nvmf/common.sh@117 -- # sync 00:17:10.898 13:46:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.898 13:46:13 -- nvmf/common.sh@120 -- # set +e 00:17:10.898 
13:46:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.898 13:46:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.898 rmmod nvme_tcp 00:17:10.898 rmmod nvme_fabrics 00:17:10.898 rmmod nvme_keyring 00:17:10.898 13:46:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.898 13:46:13 -- nvmf/common.sh@124 -- # set -e 00:17:10.898 13:46:13 -- nvmf/common.sh@125 -- # return 0 00:17:10.898 13:46:13 -- nvmf/common.sh@478 -- # '[' -n 2637839 ']' 00:17:10.898 13:46:13 -- nvmf/common.sh@479 -- # killprocess 2637839 00:17:10.898 13:46:13 -- common/autotest_common.sh@936 -- # '[' -z 2637839 ']' 00:17:10.898 13:46:13 -- common/autotest_common.sh@940 -- # kill -0 2637839 00:17:10.898 13:46:13 -- common/autotest_common.sh@941 -- # uname 00:17:10.898 13:46:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.898 13:46:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2637839 00:17:10.898 13:46:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:10.898 13:46:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:10.898 13:46:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2637839' 00:17:10.898 killing process with pid 2637839 00:17:10.898 13:46:13 -- common/autotest_common.sh@955 -- # kill 2637839 00:17:10.898 [2024-04-18 13:46:13.662319] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:10.898 13:46:13 -- common/autotest_common.sh@960 -- # wait 2637839 00:17:11.462 13:46:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.462 13:46:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.462 13:46:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.462 13:46:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.462 13:46:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.462 
13:46:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.462 13:46:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.462 13:46:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.364 13:46:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.364 00:17:13.364 real 0m5.497s 00:17:13.364 user 0m4.684s 00:17:13.364 sys 0m1.815s 00:17:13.364 13:46:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.364 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.364 ************************************ 00:17:13.364 END TEST nvmf_identify 00:17:13.364 ************************************ 00:17:13.364 13:46:16 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:13.364 13:46:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.364 13:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.364 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.364 ************************************ 00:17:13.364 START TEST nvmf_perf 00:17:13.364 ************************************ 00:17:13.364 13:46:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:13.622 * Looking for test storage... 
00:17:13.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:13.622 13:46:16 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.622 13:46:16 -- nvmf/common.sh@7 -- # uname -s 00:17:13.622 13:46:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.622 13:46:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.622 13:46:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.622 13:46:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.622 13:46:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.622 13:46:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.622 13:46:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.622 13:46:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.622 13:46:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.622 13:46:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.622 13:46:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.622 13:46:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.622 13:46:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.622 13:46:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.622 13:46:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.622 13:46:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.622 13:46:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.622 13:46:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.622 13:46:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.622 13:46:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.622 13:46:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.623 13:46:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.623 13:46:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.623 13:46:16 -- paths/export.sh@5 -- # export PATH 00:17:13.623 13:46:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.623 13:46:16 -- nvmf/common.sh@47 -- # : 0 00:17:13.623 13:46:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.623 13:46:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.623 13:46:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.623 13:46:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.623 13:46:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.623 13:46:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.623 13:46:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.623 13:46:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.623 13:46:16 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:13.623 13:46:16 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:13.623 13:46:16 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.623 13:46:16 -- host/perf.sh@17 -- # nvmftestinit 00:17:13.623 13:46:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:13.623 13:46:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.623 13:46:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:13.623 13:46:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:13.623 13:46:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:13.623 13:46:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.623 13:46:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:13.623 13:46:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.623 13:46:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:13.623 13:46:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:13.623 13:46:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.623 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:17:15.548 13:46:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.549 13:46:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.549 13:46:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.549 13:46:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.549 13:46:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.549 13:46:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.549 13:46:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.549 13:46:18 -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.549 13:46:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.549 13:46:18 -- nvmf/common.sh@296 -- # e810=() 00:17:15.549 13:46:18 -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.549 13:46:18 -- nvmf/common.sh@297 -- # x722=() 00:17:15.549 13:46:18 -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.549 13:46:18 -- nvmf/common.sh@298 -- # mlx=() 00:17:15.549 13:46:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.549 13:46:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.549 13:46:18 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.549 13:46:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.549 13:46:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.549 13:46:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:15.549 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:15.549 13:46:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.549 13:46:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:15.549 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:15.549 13:46:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.549 
13:46:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.549 13:46:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.549 13:46:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:15.549 13:46:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.549 13:46:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:15.549 Found net devices under 0000:84:00.0: cvl_0_0 00:17:15.549 13:46:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.549 13:46:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.549 13:46:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:15.549 13:46:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.549 13:46:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:15.549 Found net devices under 0000:84:00.1: cvl_0_1 00:17:15.549 13:46:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:15.549 13:46:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:15.549 13:46:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:15.549 13:46:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.549 13:46:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.549 13:46:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.549 13:46:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.549 13:46:18 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.549 13:46:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.549 13:46:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.549 13:46:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.549 13:46:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.549 13:46:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.549 13:46:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.549 13:46:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.549 13:46:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.549 13:46:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.549 13:46:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.549 13:46:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.549 13:46:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.549 13:46:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.549 13:46:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:17:15.549 00:17:15.549 --- 10.0.0.2 ping statistics --- 00:17:15.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.549 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:15.549 13:46:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:17:15.549 00:17:15.549 --- 10.0.0.1 ping statistics --- 00:17:15.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.549 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:15.549 13:46:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.549 13:46:18 -- nvmf/common.sh@411 -- # return 0 00:17:15.549 13:46:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:15.549 13:46:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.549 13:46:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:15.549 13:46:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.549 13:46:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:15.549 13:46:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:15.549 13:46:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:15.549 13:46:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:15.549 13:46:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:15.549 13:46:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.549 13:46:18 -- nvmf/common.sh@470 -- # nvmfpid=2639941 00:17:15.549 13:46:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:15.549 13:46:18 -- nvmf/common.sh@471 -- # waitforlisten 2639941 00:17:15.549 13:46:18 -- common/autotest_common.sh@817 -- # '[' -z 2639941 ']' 00:17:15.549 13:46:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.549 13:46:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.549 13:46:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:15.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.549 13:46:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.549 13:46:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.549 [2024-04-18 13:46:18.315566] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:15.549 [2024-04-18 13:46:18.315644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.808 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.808 [2024-04-18 13:46:18.381731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.808 [2024-04-18 13:46:18.496011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.808 [2024-04-18 13:46:18.496087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.808 [2024-04-18 13:46:18.496103] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.808 [2024-04-18 13:46:18.496117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.808 [2024-04-18 13:46:18.496129] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
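The nvmf_tcp_init sequence above (nvmf/common.sh) can be condensed into a standalone sketch: one port of the dual-port NIC is moved into a network namespace so target and initiator traffic actually crosses the wire between cvl_0_0 and cvl_0_1. Interface names, addresses, and the iptables rule are taken verbatim from this log; the `run` wrapper and `DRY_RUN` flag are illustrative additions, since the real commands need root and the physical cvl_0_* interfaces.

```shell
#!/usr/bin/env bash
# Sketch of SPDK's nvmf_tcp_init as it ran above. Defaults to dry-run
# (prints the plan); set DRY_RUN=0 to actually execute as root.
set -eu

DRY_RUN="${DRY_RUN:-1}"
PLAN=""
TARGET_IF=cvl_0_0        # moved into the namespace (target side)
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NETNS=cvl_0_0_ns_spdk

run() {
    PLAN="$PLAN$*"$'\n'
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1  # target -> initiator
```

With the namespace in place, the target (`nvmf_tgt`) is launched via `ip netns exec cvl_0_0_ns_spdk ...`, which is why the log wraps NVMF_APP in NVMF_TARGET_NS_CMD.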
00:17:15.808 [2024-04-18 13:46:18.496199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.808 [2024-04-18 13:46:18.496245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.808 [2024-04-18 13:46:18.496408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.808 [2024-04-18 13:46:18.496412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.739 13:46:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.739 13:46:19 -- common/autotest_common.sh@850 -- # return 0 00:17:16.739 13:46:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:16.739 13:46:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.739 13:46:19 -- common/autotest_common.sh@10 -- # set +x 00:17:16.739 13:46:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.739 13:46:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:16.739 13:46:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:20.015 13:46:22 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:20.015 13:46:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:20.015 13:46:22 -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:17:20.015 13:46:22 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:20.272 13:46:22 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:20.272 13:46:22 -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:17:20.272 13:46:22 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:20.272 13:46:22 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:20.272 13:46:22 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:17:20.534 [2024-04-18 13:46:23.149703] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.534 13:46:23 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.790 13:46:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:20.790 13:46:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.046 13:46:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:21.046 13:46:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:21.303 13:46:23 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.303 [2024-04-18 13:46:24.109313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.560 13:46:24 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.817 13:46:24 -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:17:21.817 13:46:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:17:21.817 13:46:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:21.817 13:46:24 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:17:23.186 Initializing NVMe Controllers 00:17:23.186 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:17:23.186 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:17:23.186 Initialization complete. Launching workers. 
00:17:23.186 ======================================================== 00:17:23.186 Latency(us) 00:17:23.186 Device Information : IOPS MiB/s Average min max 00:17:23.187 PCIE (0000:82:00.0) NSID 1 from core 0: 85335.02 333.34 374.57 33.11 4360.64 00:17:23.187 ======================================================== 00:17:23.187 Total : 85335.02 333.34 374.57 33.11 4360.64 00:17:23.187 00:17:23.187 13:46:25 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:23.187 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.118 Initializing NVMe Controllers 00:17:24.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:24.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:24.118 Initialization complete. Launching workers. 
00:17:24.118 ======================================================== 00:17:24.118 Latency(us) 00:17:24.118 Device Information : IOPS MiB/s Average min max 00:17:24.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 111.00 0.43 9015.53 166.15 45139.65 00:17:24.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17137.19 7312.58 50861.04 00:17:24.118 ======================================================== 00:17:24.118 Total : 172.00 0.67 11895.89 166.15 50861.04 00:17:24.118 00:17:24.118 13:46:26 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:24.118 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.500 Initializing NVMe Controllers 00:17:25.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:25.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:25.500 Initialization complete. Launching workers. 
00:17:25.500 ======================================================== 00:17:25.500 Latency(us) 00:17:25.500 Device Information : IOPS MiB/s Average min max 00:17:25.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8364.97 32.68 3843.49 525.86 7979.14 00:17:25.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3899.52 15.23 8241.01 6496.78 15769.25 00:17:25.500 ======================================================== 00:17:25.500 Total : 12264.49 47.91 5241.69 525.86 15769.25 00:17:25.500 00:17:25.500 13:46:28 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:25.500 13:46:28 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:25.500 13:46:28 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:25.500 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.034 Initializing NVMe Controllers 00:17:28.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.034 Controller IO queue size 128, less than required. 00:17:28.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.034 Controller IO queue size 128, less than required. 00:17:28.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:28.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:28.034 Initialization complete. Launching workers. 
00:17:28.034 ======================================================== 00:17:28.034 Latency(us) 00:17:28.034 Device Information : IOPS MiB/s Average min max 00:17:28.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1073.85 268.46 122875.77 62772.10 250819.13 00:17:28.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.92 152.23 222203.33 118006.39 365533.25 00:17:28.034 ======================================================== 00:17:28.034 Total : 1682.77 420.69 158817.83 62772.10 365533.25 00:17:28.034 00:17:28.034 13:46:30 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:28.034 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.601 No valid NVMe controllers or AIO or URING devices found 00:17:28.601 Initializing NVMe Controllers 00:17:28.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.601 Controller IO queue size 128, less than required. 00:17:28.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:28.601 Controller IO queue size 128, less than required. 00:17:28.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:17:28.601 WARNING: Some requested NVMe devices were skipped 00:17:28.601 13:46:31 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:28.601 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.146 Initializing NVMe Controllers 00:17:31.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.146 Controller IO queue size 128, less than required. 00:17:31.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:31.146 Controller IO queue size 128, less than required. 00:17:31.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:31.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:31.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:31.146 Initialization complete. Launching workers. 
00:17:31.146 00:17:31.146 ==================== 00:17:31.146 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:31.146 TCP transport: 00:17:31.146 polls: 22552 00:17:31.146 idle_polls: 10794 00:17:31.146 sock_completions: 11758 00:17:31.146 nvme_completions: 4193 00:17:31.146 submitted_requests: 6352 00:17:31.146 queued_requests: 1 00:17:31.146 00:17:31.146 ==================== 00:17:31.146 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:31.146 TCP transport: 00:17:31.146 polls: 20754 00:17:31.146 idle_polls: 8530 00:17:31.146 sock_completions: 12224 00:17:31.146 nvme_completions: 4357 00:17:31.146 submitted_requests: 6556 00:17:31.146 queued_requests: 1 00:17:31.146 ======================================================== 00:17:31.146 Latency(us) 00:17:31.146 Device Information : IOPS MiB/s Average min max 00:17:31.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1045.64 261.41 127887.35 77825.10 261824.60 00:17:31.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1086.55 271.64 122900.25 46972.49 226992.26 00:17:31.146 ======================================================== 00:17:31.146 Total : 2132.18 533.05 125345.96 46972.49 261824.60 00:17:31.146 00:17:31.146 13:46:33 -- host/perf.sh@66 -- # sync 00:17:31.146 13:46:33 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.713 13:46:34 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:31.713 13:46:34 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:31.713 13:46:34 -- host/perf.sh@114 -- # nvmftestfini 00:17:31.713 13:46:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:31.713 13:46:34 -- nvmf/common.sh@117 -- # sync 00:17:31.713 13:46:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.713 13:46:34 -- nvmf/common.sh@120 -- # set +e 00:17:31.713 13:46:34 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.713 13:46:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.713 rmmod nvme_tcp 00:17:31.713 rmmod nvme_fabrics 00:17:31.713 rmmod nvme_keyring 00:17:31.713 13:46:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.713 13:46:34 -- nvmf/common.sh@124 -- # set -e 00:17:31.713 13:46:34 -- nvmf/common.sh@125 -- # return 0 00:17:31.713 13:46:34 -- nvmf/common.sh@478 -- # '[' -n 2639941 ']' 00:17:31.713 13:46:34 -- nvmf/common.sh@479 -- # killprocess 2639941 00:17:31.713 13:46:34 -- common/autotest_common.sh@936 -- # '[' -z 2639941 ']' 00:17:31.713 13:46:34 -- common/autotest_common.sh@940 -- # kill -0 2639941 00:17:31.713 13:46:34 -- common/autotest_common.sh@941 -- # uname 00:17:31.713 13:46:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.713 13:46:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2639941 00:17:31.713 13:46:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.713 13:46:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.713 13:46:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2639941' 00:17:31.713 killing process with pid 2639941 00:17:31.713 13:46:34 -- common/autotest_common.sh@955 -- # kill 2639941 00:17:31.713 13:46:34 -- common/autotest_common.sh@960 -- # wait 2639941 00:17:33.655 13:46:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:33.655 13:46:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:33.655 13:46:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:33.655 13:46:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.655 13:46:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.655 13:46:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.655 13:46:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.655 13:46:35 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:35.567 13:46:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.567 00:17:35.567 real 0m21.892s 00:17:35.567 user 1m8.692s 00:17:35.567 sys 0m5.546s 00:17:35.567 13:46:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.567 13:46:38 -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 ************************************ 00:17:35.567 END TEST nvmf_perf 00:17:35.567 ************************************ 00:17:35.567 13:46:38 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:35.567 13:46:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.567 13:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.567 13:46:38 -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 ************************************ 00:17:35.567 START TEST nvmf_fio_host 00:17:35.567 ************************************ 00:17:35.567 13:46:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:35.567 * Looking for test storage... 
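The Total rows in the spdk_nvme_perf tables of the nvmf_perf run that just finished are plain arithmetic over the per-namespace rows: total IOPS is the sum, and the total average latency is the IOPS-weighted mean of the per-namespace averages. A quick sanity check against the -q 32 table from that run (figures copied from the log; awk is used only because POSIX shell lacks floating-point arithmetic):

```shell
#!/usr/bin/env bash
# Recompute the "Total" row of the -q 32 randrw perf table:
#   NSID 1: 8364.97 IOPS @ 3843.49 us average
#   NSID 2: 3899.52 IOPS @ 8241.01 us average
set -eu

total=$(awk 'BEGIN {
    iops1 = 8364.97; lat1 = 3843.49
    iops2 = 3899.52; lat2 = 8241.01
    t = iops1 + iops2                       # total IOPS: simple sum
    w = (iops1*lat1 + iops2*lat2) / t       # IOPS-weighted mean latency
    printf "%.2f %.2f", t, w
}')

echo "$total"
# 12264.49 5241.69  -- matches the Total row in the log
```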
00:17:35.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:35.567 13:46:38 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.567 13:46:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.567 13:46:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.567 13:46:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.567 13:46:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.567 13:46:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.567 13:46:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.567 13:46:38 -- paths/export.sh@5 -- # export PATH 00:17:35.567 13:46:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.567 13:46:38 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.567 13:46:38 -- nvmf/common.sh@7 -- # uname -s 00:17:35.567 13:46:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.567 13:46:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.567 13:46:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.567 13:46:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.567 13:46:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.567 13:46:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.567 13:46:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.567 13:46:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.567 13:46:38 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.567 13:46:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.567 13:46:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.567 13:46:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.567 13:46:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.567 13:46:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.567 13:46:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.567 13:46:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.567 13:46:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.567 13:46:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.567 13:46:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.567 13:46:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.567 13:46:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.567 13:46:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.568 13:46:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.568 13:46:38 -- paths/export.sh@5 -- # export PATH 00:17:35.568 13:46:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.568 13:46:38 -- nvmf/common.sh@47 
-- # : 0 00:17:35.568 13:46:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.568 13:46:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.568 13:46:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.568 13:46:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.568 13:46:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.568 13:46:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.568 13:46:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.568 13:46:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.568 13:46:38 -- host/fio.sh@12 -- # nvmftestinit 00:17:35.568 13:46:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:35.568 13:46:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.568 13:46:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:35.568 13:46:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:35.568 13:46:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:35.568 13:46:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.568 13:46:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.568 13:46:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.568 13:46:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:35.568 13:46:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:35.568 13:46:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.568 13:46:38 -- common/autotest_common.sh@10 -- # set +x 00:17:38.105 13:46:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:38.105 13:46:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.105 13:46:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.105 13:46:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.105 13:46:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.105 13:46:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.105 13:46:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:17:38.105 13:46:40 -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.105 13:46:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.105 13:46:40 -- nvmf/common.sh@296 -- # e810=() 00:17:38.105 13:46:40 -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.105 13:46:40 -- nvmf/common.sh@297 -- # x722=() 00:17:38.105 13:46:40 -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.105 13:46:40 -- nvmf/common.sh@298 -- # mlx=() 00:17:38.105 13:46:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.105 13:46:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.105 13:46:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.105 13:46:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.105 13:46:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:17:38.105 13:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:38.105 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:38.105 13:46:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.105 13:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:38.105 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:38.105 13:46:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.105 13:46:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.105 13:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.105 13:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:38.105 Found net devices under 0000:84:00.0: cvl_0_0 00:17:38.105 13:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.105 13:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.105 13:46:40 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.105 13:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.105 13:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:38.105 Found net devices under 0000:84:00.1: cvl_0_1 00:17:38.105 13:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.105 13:46:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:38.105 13:46:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:38.105 13:46:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:38.105 13:46:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.105 13:46:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.105 13:46:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.105 13:46:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.105 13:46:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.105 13:46:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.105 13:46:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.105 13:46:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.105 13:46:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.105 13:46:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.105 13:46:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.105 13:46:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.105 13:46:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.105 13:46:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.105 13:46:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:17:38.105 13:46:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.105 13:46:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.105 13:46:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.105 13:46:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.105 13:46:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:17:38.105 00:17:38.105 --- 10.0.0.2 ping statistics --- 00:17:38.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.105 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:38.106 13:46:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:38.106 00:17:38.106 --- 10.0.0.1 ping statistics --- 00:17:38.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.106 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:38.106 13:46:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.106 13:46:40 -- nvmf/common.sh@411 -- # return 0 00:17:38.106 13:46:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:38.106 13:46:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.106 13:46:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:38.106 13:46:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:38.106 13:46:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.106 13:46:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:38.106 13:46:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:38.106 13:46:40 -- host/fio.sh@14 -- # [[ y != y ]] 00:17:38.106 13:46:40 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 
00:17:38.106 13:46:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.106 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:17:38.106 13:46:40 -- host/fio.sh@22 -- # nvmfpid=2644066 00:17:38.106 13:46:40 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.106 13:46:40 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.106 13:46:40 -- host/fio.sh@26 -- # waitforlisten 2644066 00:17:38.106 13:46:40 -- common/autotest_common.sh@817 -- # '[' -z 2644066 ']' 00:17:38.106 13:46:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.106 13:46:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.106 13:46:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.106 13:46:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.106 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:17:38.106 [2024-04-18 13:46:40.504437] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:38.106 [2024-04-18 13:46:40.504538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.106 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.106 [2024-04-18 13:46:40.575545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.106 [2024-04-18 13:46:40.693023] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:38.106 [2024-04-18 13:46:40.693081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.106 [2024-04-18 13:46:40.693096] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.106 [2024-04-18 13:46:40.693108] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.106 [2024-04-18 13:46:40.693119] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.106 [2024-04-18 13:46:40.693203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.106 [2024-04-18 13:46:40.693244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.106 [2024-04-18 13:46:40.693298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.106 [2024-04-18 13:46:40.693301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.673 13:46:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.673 13:46:41 -- common/autotest_common.sh@850 -- # return 0 00:17:38.673 13:46:41 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.673 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.673 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.673 [2024-04-18 13:46:41.449945] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.673 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.673 13:46:41 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:38.673 13:46:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:38.673 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.673 13:46:41 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:38.673 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.673 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.932 Malloc1 
00:17:38.932 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.932 13:46:41 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.932 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.932 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.932 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.932 13:46:41 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:38.932 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.932 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.932 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.932 13:46:41 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.932 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.932 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.932 [2024-04-18 13:46:41.521473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.932 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.932 13:46:41 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.932 13:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.932 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:17:38.932 13:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.932 13:46:41 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:17:38.932 13:46:41 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:38.932 13:46:41 -- common/autotest_common.sh@1346 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:38.932 13:46:41 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:38.932 13:46:41 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:38.932 13:46:41 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:38.932 13:46:41 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.932 13:46:41 -- common/autotest_common.sh@1327 -- # shift 00:17:38.932 13:46:41 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:38.932 13:46:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:38.932 13:46:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:38.932 13:46:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:38.932 13:46:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:38.932 13:46:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:38.932 13:46:41 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:17:38.932 13:46:41 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:39.190 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:39.190 fio-3.35 00:17:39.190 Starting 1 thread 00:17:39.190 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.717 00:17:41.717 test: (groupid=0, jobs=1): err= 0: pid=2644298: Thu Apr 18 13:46:44 2024 00:17:41.717 read: IOPS=8881, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2006msec) 00:17:41.717 slat (usec): min=2, max=255, avg= 3.47, stdev= 3.41 00:17:41.717 clat (usec): min=2547, max=13321, avg=7913.90, stdev=577.93 00:17:41.717 lat (usec): min=2569, max=13324, avg=7917.36, stdev=577.74 00:17:41.717 clat percentiles (usec): 00:17:41.717 | 1.00th=[ 6652], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7439], 00:17:41.717 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:17:41.717 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:17:41.717 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[12256], 00:17:41.717 | 99.99th=[13042] 00:17:41.717 bw ( KiB/s): min=34880, max=36128, per=99.89%, avg=35488.00, stdev=524.72, samples=4 00:17:41.717 iops : min= 8720, max= 9032, avg=8872.00, stdev=131.18, samples=4 00:17:41.717 write: IOPS=8893, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec); 0 zone resets 00:17:41.717 slat (usec): min=2, max=206, avg= 3.64, stdev= 2.73 00:17:41.717 clat (usec): min=1736, max=12226, avg=6425.34, stdev=503.29 00:17:41.717 lat (usec): min=1745, max=12229, avg=6428.98, stdev=503.22 00:17:41.717 clat percentiles (usec): 00:17:41.717 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:17:41.717 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:17:41.717 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 
95.00th=[ 7177], 00:17:41.717 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10290], 99.95th=[11338], 00:17:41.717 | 99.99th=[12125] 00:17:41.717 bw ( KiB/s): min=34880, max=36224, per=100.00%, avg=35574.00, stdev=561.04, samples=4 00:17:41.717 iops : min= 8720, max= 9056, avg=8893.50, stdev=140.26, samples=4 00:17:41.717 lat (msec) : 2=0.01%, 4=0.10%, 10=99.75%, 20=0.14% 00:17:41.717 cpu : usr=67.18%, sys=29.33%, ctx=61, majf=0, minf=38 00:17:41.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:41.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.717 issued rwts: total=17817,17841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.718 00:17:41.718 Run status group 0 (all jobs): 00:17:41.718 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2006-2006msec 00:17:41.718 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:17:41.718 13:46:44 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.718 13:46:44 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.718 13:46:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:41.718 13:46:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:41.718 13:46:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:41.718 13:46:44 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.718 13:46:44 -- common/autotest_common.sh@1327 -- # shift 00:17:41.718 13:46:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:41.718 13:46:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:41.718 13:46:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:41.718 13:46:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:41.718 13:46:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:41.718 13:46:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:41.718 13:46:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:41.718 13:46:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:41.718 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:41.718 fio-3.35 00:17:41.718 Starting 1 thread 00:17:41.718 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.248 00:17:44.248 test: (groupid=0, jobs=1): err= 0: pid=2644710: Thu Apr 18 
13:46:46 2024 00:17:44.248 read: IOPS=7888, BW=123MiB/s (129MB/s)(247MiB/2007msec) 00:17:44.248 slat (usec): min=3, max=252, avg= 4.16, stdev= 3.52 00:17:44.248 clat (usec): min=3317, max=20555, avg=9705.26, stdev=2558.29 00:17:44.248 lat (usec): min=3321, max=20577, avg=9709.42, stdev=2558.40 00:17:44.248 clat percentiles (usec): 00:17:44.248 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7570], 00:17:44.248 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10159], 00:17:44.248 | 70.00th=[10683], 80.00th=[11600], 90.00th=[12780], 95.00th=[14091], 00:17:44.248 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19268], 99.95th=[19792], 00:17:44.248 | 99.99th=[20317] 00:17:44.248 bw ( KiB/s): min=54432, max=73312, per=51.05%, avg=64432.00, stdev=9079.64, samples=4 00:17:44.248 iops : min= 3402, max= 4582, avg=4027.00, stdev=567.48, samples=4 00:17:44.248 write: IOPS=4545, BW=71.0MiB/s (74.5MB/s)(132MiB/1854msec); 0 zone resets 00:17:44.248 slat (usec): min=30, max=175, avg=37.78, stdev= 6.33 00:17:44.248 clat (usec): min=7010, max=19621, avg=11631.47, stdev=1978.74 00:17:44.248 lat (usec): min=7045, max=19656, avg=11669.25, stdev=1978.50 00:17:44.248 clat percentiles (usec): 00:17:44.248 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:17:44.248 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:17:44.248 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15270], 00:17:44.248 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17957], 99.95th=[18482], 00:17:44.248 | 99.99th=[19530] 00:17:44.248 bw ( KiB/s): min=57696, max=75776, per=92.24%, avg=67088.00, stdev=9060.33, samples=4 00:17:44.248 iops : min= 3606, max= 4736, avg=4193.00, stdev=566.27, samples=4 00:17:44.248 lat (msec) : 4=0.07%, 10=44.88%, 20=55.04%, 50=0.02% 00:17:44.248 cpu : usr=74.53%, sys=22.38%, ctx=33, majf=0, minf=54 00:17:44.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:44.248 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.248 issued rwts: total=15832,8428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.248 00:17:44.248 Run status group 0 (all jobs): 00:17:44.248 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=247MiB (259MB), run=2007-2007msec 00:17:44.248 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=132MiB (138MB), run=1854-1854msec 00:17:44.248 13:46:46 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.248 13:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.248 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:17:44.248 13:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.248 13:46:46 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:44.248 13:46:46 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:44.248 13:46:46 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:44.248 13:46:46 -- host/fio.sh@84 -- # nvmftestfini 00:17:44.248 13:46:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:44.248 13:46:46 -- nvmf/common.sh@117 -- # sync 00:17:44.248 13:46:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.248 13:46:46 -- nvmf/common.sh@120 -- # set +e 00:17:44.248 13:46:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.248 13:46:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.248 rmmod nvme_tcp 00:17:44.248 rmmod nvme_fabrics 00:17:44.248 rmmod nvme_keyring 00:17:44.248 13:46:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.248 13:46:46 -- nvmf/common.sh@124 -- # set -e 00:17:44.248 13:46:46 -- nvmf/common.sh@125 -- # return 0 00:17:44.248 13:46:46 -- nvmf/common.sh@478 -- # '[' -n 2644066 ']' 00:17:44.248 13:46:46 -- nvmf/common.sh@479 -- # killprocess 2644066 00:17:44.248 13:46:46 -- 
common/autotest_common.sh@936 -- # '[' -z 2644066 ']' 00:17:44.248 13:46:46 -- common/autotest_common.sh@940 -- # kill -0 2644066 00:17:44.248 13:46:46 -- common/autotest_common.sh@941 -- # uname 00:17:44.248 13:46:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.248 13:46:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2644066 00:17:44.248 13:46:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.248 13:46:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.248 13:46:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2644066' 00:17:44.248 killing process with pid 2644066 00:17:44.248 13:46:46 -- common/autotest_common.sh@955 -- # kill 2644066 00:17:44.248 13:46:46 -- common/autotest_common.sh@960 -- # wait 2644066 00:17:44.508 13:46:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.508 13:46:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.508 13:46:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.508 13:46:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.508 13:46:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.508 13:46:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.508 13:46:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.508 13:46:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.412 13:46:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.412 00:17:46.412 real 0m11.023s 00:17:46.412 user 0m29.989s 00:17:46.412 sys 0m3.697s 00:17:46.412 13:46:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.412 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:17:46.412 ************************************ 00:17:46.412 END TEST nvmf_fio_host 00:17:46.412 ************************************ 00:17:46.412 13:46:49 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.412 13:46:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.412 13:46:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.412 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:17:46.671 ************************************ 00:17:46.671 START TEST nvmf_failover 00:17:46.671 ************************************ 00:17:46.671 13:46:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.671 * Looking for test storage... 00:17:46.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.671 13:46:49 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.671 13:46:49 -- nvmf/common.sh@7 -- # uname -s 00:17:46.671 13:46:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.671 13:46:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.671 13:46:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.671 13:46:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.671 13:46:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.671 13:46:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.671 13:46:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.671 13:46:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.671 13:46:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.671 13:46:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.671 13:46:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.671 13:46:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.671 13:46:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.671 13:46:49 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.671 13:46:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.671 13:46:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.671 13:46:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.671 13:46:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.671 13:46:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.671 13:46:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.671 13:46:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.671 13:46:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.671 13:46:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.671 13:46:49 -- paths/export.sh@5 -- # export PATH 00:17:46.671 13:46:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.671 13:46:49 -- nvmf/common.sh@47 -- # : 0 00:17:46.671 13:46:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.671 13:46:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.671 13:46:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.671 13:46:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.671 13:46:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.671 13:46:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.671 13:46:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.671 13:46:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.671 13:46:49 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.671 13:46:49 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.671 13:46:49 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.671 13:46:49 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.671 13:46:49 -- host/failover.sh@18 -- # nvmftestinit 00:17:46.671 13:46:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:46.671 13:46:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.671 13:46:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:46.672 13:46:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:46.672 13:46:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:46.672 13:46:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.672 13:46:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.672 13:46:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.672 13:46:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:46.672 13:46:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:46.672 13:46:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.672 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:17:48.572 13:46:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.572 13:46:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.572 13:46:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.572 13:46:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.572 13:46:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.572 13:46:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.572 13:46:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.572 13:46:51 -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.572 13:46:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.572 13:46:51 -- nvmf/common.sh@296 -- # e810=() 00:17:48.572 13:46:51 -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.572 13:46:51 -- nvmf/common.sh@297 -- # x722=() 00:17:48.572 13:46:51 -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.572 13:46:51 -- 
nvmf/common.sh@298 -- # mlx=() 00:17:48.572 13:46:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.572 13:46:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.572 13:46:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.572 13:46:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.572 13:46:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.572 13:46:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:48.572 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:48.572 13:46:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.572 13:46:51 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.572 13:46:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:48.572 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:48.572 13:46:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.572 13:46:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.572 13:46:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.572 13:46:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:48.572 Found net devices under 0000:84:00.0: cvl_0_0 00:17:48.572 13:46:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.572 13:46:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.572 13:46:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.572 13:46:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.572 13:46:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:48.572 Found net devices under 0000:84:00.1: cvl_0_1 00:17:48.572 13:46:51 -- nvmf/common.sh@390 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:48.572 13:46:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:48.572 13:46:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:48.572 13:46:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:48.572 13:46:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.572 13:46:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.572 13:46:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.572 13:46:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.572 13:46:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.572 13:46:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.572 13:46:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.572 13:46:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.572 13:46:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.572 13:46:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.572 13:46:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.572 13:46:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.572 13:46:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.572 13:46:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.572 13:46:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.572 13:46:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.572 13:46:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.572 13:46:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.572 13:46:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.572 
13:46:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:48.572 00:17:48.572 --- 10.0.0.2 ping statistics --- 00:17:48.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.572 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:48.572 13:46:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:48.831 00:17:48.831 --- 10.0.0.1 ping statistics --- 00:17:48.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.831 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:48.831 13:46:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.831 13:46:51 -- nvmf/common.sh@411 -- # return 0 00:17:48.831 13:46:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:48.831 13:46:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.831 13:46:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:48.831 13:46:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:48.831 13:46:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.831 13:46:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:48.831 13:46:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:48.831 13:46:51 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:48.831 13:46:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:48.831 13:46:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:48.831 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.831 13:46:51 -- nvmf/common.sh@470 -- # nvmfpid=2646918 00:17:48.831 13:46:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:48.831 
13:46:51 -- nvmf/common.sh@471 -- # waitforlisten 2646918 00:17:48.831 13:46:51 -- common/autotest_common.sh@817 -- # '[' -z 2646918 ']' 00:17:48.831 13:46:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.831 13:46:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:48.831 13:46:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.831 13:46:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:48.831 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.831 [2024-04-18 13:46:51.445231] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:17:48.831 [2024-04-18 13:46:51.445304] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.831 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.831 [2024-04-18 13:46:51.509070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:48.831 [2024-04-18 13:46:51.614781] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.831 [2024-04-18 13:46:51.614836] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.831 [2024-04-18 13:46:51.614850] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.831 [2024-04-18 13:46:51.614862] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.831 [2024-04-18 13:46:51.614872] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.831 [2024-04-18 13:46:51.615024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.831 [2024-04-18 13:46:51.615068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.831 [2024-04-18 13:46:51.615071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.089 13:46:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.089 13:46:51 -- common/autotest_common.sh@850 -- # return 0 00:17:49.089 13:46:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:49.089 13:46:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:49.089 13:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 13:46:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.089 13:46:51 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:49.346 [2024-04-18 13:46:51.973397] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.346 13:46:51 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:49.604 Malloc0 00:17:49.604 13:46:52 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.865 13:46:52 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.140 13:46:52 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.398 [2024-04-18 13:46:53.082669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.398 13:46:53 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:50.655 [2024-04-18 13:46:53.319405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:50.655 13:46:53 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:50.913 [2024-04-18 13:46:53.604242] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:50.913 13:46:53 -- host/failover.sh@31 -- # bdevperf_pid=2647184 00:17:50.913 13:46:53 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:50.913 13:46:53 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.913 13:46:53 -- host/failover.sh@34 -- # waitforlisten 2647184 /var/tmp/bdevperf.sock 00:17:50.913 13:46:53 -- common/autotest_common.sh@817 -- # '[' -z 2647184 ']' 00:17:50.913 13:46:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.913 13:46:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:50.913 13:46:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:50.913 13:46:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:50.913 13:46:53 -- common/autotest_common.sh@10 -- # set +x 00:17:51.172 13:46:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:51.172 13:46:53 -- common/autotest_common.sh@850 -- # return 0 00:17:51.172 13:46:53 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:51.736 NVMe0n1 00:17:51.736 13:46:54 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:52.302 00:17:52.302 13:46:54 -- host/failover.sh@39 -- # run_test_pid=2647394 00:17:52.302 13:46:54 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.302 13:46:54 -- host/failover.sh@41 -- # sleep 1 00:17:53.235 13:46:55 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.493 [2024-04-18 13:46:56.137526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 
13:46:56.137660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137683] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137809] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137832] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137945] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.137989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138083] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.493 [2024-04-18 13:46:56.138139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138248] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 [2024-04-18 13:46:56.138330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d290 is same with the state(5) to be set 00:17:53.494 13:46:56 -- host/failover.sh@45 -- # sleep 3 00:17:57.036 13:46:59 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.036 00:17:57.036 13:46:59 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:57.295 [2024-04-18 13:46:59.854578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854641] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854669] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854724] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854819] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854966] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.854992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855084] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855118] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.295 [2024-04-18 13:46:59.855152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.296 [2024-04-18 13:46:59.855162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.296 [2024-04-18 13:46:59.855174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.296 [2024-04-18 13:46:59.855194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.296 [2024-04-18 13:46:59.855206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 00:17:57.296 13:46:59 -- host/failover.sh@50 -- # sleep 3 00:18:00.576 13:47:02 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.576 [2024-04-18 13:47:03.141106] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.576 13:47:03 -- host/failover.sh@55 -- # sleep 1 00:18:01.509 13:47:04 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:01.767 [2024-04-18 13:47:04.436521] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436719] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436730] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436884] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.767 [2024-04-18 13:47:04.436956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.436967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.436982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.436995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437038] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437082] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437195] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437344] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437495] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437649] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437755] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 [2024-04-18 13:47:04.437780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125f070 is same with the state(5) to be set 00:18:01.768 13:47:04 -- host/failover.sh@59 -- # wait 2647394 
00:18:08.331 0 00:18:08.331 13:47:10 -- host/failover.sh@61 -- # killprocess 2647184 00:18:08.331 13:47:10 -- common/autotest_common.sh@936 -- # '[' -z 2647184 ']' 00:18:08.331 13:47:10 -- common/autotest_common.sh@940 -- # kill -0 2647184 00:18:08.331 13:47:10 -- common/autotest_common.sh@941 -- # uname 00:18:08.331 13:47:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.331 13:47:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2647184 00:18:08.331 13:47:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.331 13:47:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.332 13:47:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2647184' 00:18:08.332 killing process with pid 2647184 00:18:08.332 13:47:10 -- common/autotest_common.sh@955 -- # kill 2647184 00:18:08.332 13:47:10 -- common/autotest_common.sh@960 -- # wait 2647184 00:18:08.332 13:47:10 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:08.332 [2024-04-18 13:46:53.667715] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:08.332 [2024-04-18 13:46:53.667811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647184 ] 00:18:08.332 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.332 [2024-04-18 13:46:53.730785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.332 [2024-04-18 13:46:53.845703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.332 Running I/O for 15 seconds... 
00:18:08.332 [2024-04-18 13:46:56.139412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.139455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.139957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.139971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 
13:46:56.139986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.332 [2024-04-18 13:46:56.140375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.332 [2024-04-18 13:46:56.140542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.332 [2024-04-18 13:46:56.140556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 
[2024-04-18 13:46:56.140849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.140993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 
[2024-04-18 13:46:56.141367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 [2024-04-18 13:46:56.141397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 [2024-04-18 13:46:56.141425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 [2024-04-18 13:46:56.141454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 [2024-04-18 13:46:56.141483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.333 [2024-04-18 13:46:56.141527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.333 [2024-04-18 13:46:56.141689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.333 [2024-04-18 13:46:56.141703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 
[2024-04-18 13:46:56.141874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.141984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.141998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142027] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.334 [2024-04-18 13:46:56.142477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80736 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:80744 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80752 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80768 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 
13:46:56.142758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80776 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80784 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.334 [2024-04-18 13:46:56.142850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.334 [2024-04-18 13:46:56.142860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.334 [2024-04-18 13:46:56.142875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:18:08.334 [2024-04-18 13:46:56.142888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.142901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.142911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 
[2024-04-18 13:46:56.142922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.142935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.142947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.142958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.142968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80808 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.142980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.142993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80816 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80824 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80832 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143266] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 
[2024-04-18 13:46:56.143437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.335 [2024-04-18 13:46:56.143869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.335 [2024-04-18 13:46:56.143880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:18:08.335 [2024-04-18 13:46:56.143892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.143950] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaee380 was disconnected and freed. reset controller. 
00:18:08.335 [2024-04-18 13:46:56.143969] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:08.335 [2024-04-18 13:46:56.144002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.335 [2024-04-18 13:46:56.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.144043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.335 [2024-04-18 13:46:56.144056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.144069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.335 [2024-04-18 13:46:56.144092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.144105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.335 [2024-04-18 13:46:56.144122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.335 [2024-04-18 13:46:56.144135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:08.336 [2024-04-18 13:46:56.144216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf8a0 (9): Bad file descriptor 00:18:08.336 [2024-04-18 13:46:56.147453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.336 [2024-04-18 13:46:56.269881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:08.336 [2024-04-18 13:46:59.855858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.855899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.855925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.855940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.855956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.855969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.855984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.855997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107776 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 
13:46:59.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:08.336 [2024-04-18 13:46:59.856741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.336 [2024-04-18 13:46:59.856797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.336 [2024-04-18 13:46:59.856810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.856975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.856995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:08.337 [2024-04-18 13:46:59.857271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.337 [2024-04-18 13:46:59.857342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 
[2024-04-18 13:46:59.857781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.857966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.337 [2024-04-18 13:46:59.857980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.337 [2024-04-18 13:46:59.858005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 
[2024-04-18 13:46:59.858314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.338 [2024-04-18 13:46:59.858803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 
[2024-04-18 13:46:59.858848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.858866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108520 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.858879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.858911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.858922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108528 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.858935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.858948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.858959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.858974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108536 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.858987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.859001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.859012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.859023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:108544 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.859039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.859051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.859062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.859073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108552 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.859086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.859099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.859109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.859120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108560 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.859132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.859145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.859170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.338 [2024-04-18 13:46:59.859191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108568 len:8 PRP1 0x0 PRP2 0x0 00:18:08.338 [2024-04-18 13:46:59.859205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.338 [2024-04-18 13:46:59.859219] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.338 [2024-04-18 13:46:59.859231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108576 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108584 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108592 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 
13:46:59.859407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108600 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108608 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108616 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108624 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108632 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108640 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108648 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859754] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108656 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108664 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108672 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108680 len:8 PRP1 0x0 PRP2 0x0 
00:18:08.339 [2024-04-18 13:46:59.859925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.859960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108688 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.859972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.859985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.859996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108696 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108704 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860079] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108712 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108720 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108728 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108736 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108744 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.339 [2024-04-18 13:46:59.860370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108752 len:8 PRP1 0x0 PRP2 0x0 00:18:08.339 [2024-04-18 13:46:59.860382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.339 [2024-04-18 13:46:59.860397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.339 [2024-04-18 13:46:59.860408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.340 [2024-04-18 13:46:59.860420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108760 len:8 PRP1 0x0 PRP2 0x0 00:18:08.340 [2024-04-18 13:46:59.860432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.340 [2024-04-18 13:46:59.860481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.340 [2024-04-18 13:46:59.860492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108120 len:8 PRP1 0x0 PRP2 0x0 00:18:08.340 [2024-04-18 13:46:59.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.340 [2024-04-18 13:46:59.860528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.340 [2024-04-18 13:46:59.860539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108128 len:8 PRP1 0x0 PRP2 0x0 00:18:08.340 [2024-04-18 13:46:59.860551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860615] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xadbd90 was disconnected and freed. reset controller. 
00:18:08.340 [2024-04-18 13:46:59.860637] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:08.340 [2024-04-18 13:46:59.860671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:46:59.860693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:46:59.860722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:46:59.860753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:46:59.860779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:46:59.860792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:08.340 [2024-04-18 13:46:59.860845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf8a0 (9): Bad file descriptor 00:18:08.340 [2024-04-18 13:46:59.864051] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.340 [2024-04-18 13:46:59.900193] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:08.340 [2024-04-18 13:47:04.436515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:47:04.436589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.436606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:47:04.436620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.436634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:47:04.436647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.436660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.340 [2024-04-18 13:47:04.436674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.436687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf8a0 is same with the state(5) to be set 
00:18:08.340 [2024-04-18 13:47:04.439209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 
[2024-04-18 13:47:04.439751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.340 [2024-04-18 13:47:04.439914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.340 [2024-04-18 13:47:04.439927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.439942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.439955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.439969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.439982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.439997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.341 [2024-04-18 13:47:04.440248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 
[2024-04-18 13:47:04.440263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38968 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 
13:47:04.440748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.341 [2024-04-18 13:47:04.440908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.341 [2024-04-18 13:47:04.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.440952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.440965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.440979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.440992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:08.342 [2024-04-18 13:47:04.441075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.342 [2024-04-18 13:47:04.441102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 
[2024-04-18 13:47:04.441608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441765] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.441982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.441997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.442010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.442025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.442038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.442053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.342 [2024-04-18 13:47:04.442067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.342 [2024-04-18 13:47:04.442095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.342 [2024-04-18 13:47:04.442120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:39360 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39368 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39376 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39384 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 
[2024-04-18 13:47:04.442323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39392 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39400 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.343 [2024-04-18 13:47:04.442448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39408 len:8 PRP1 0x0 PRP2 0x0 00:18:08.343 [2024-04-18 13:47:04.442460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.343 [2024-04-18 13:47:04.442474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:08.343 [2024-04-18 13:47:04.442499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:08.344 [2024-04-18 13:47:04.443749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39608 len:8 PRP1 0x0 PRP2 0x0 00:18:08.344 [2024-04-18 13:47:04.443762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.344 [2024-04-18 13:47:04.443822] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xadbd90 was disconnected and freed. reset controller. 00:18:08.344 [2024-04-18 13:47:04.443840] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:08.344 [2024-04-18 13:47:04.443861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.344 [2024-04-18 13:47:04.447080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:08.344 [2024-04-18 13:47:04.447118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf8a0 (9): Bad file descriptor 00:18:08.344 [2024-04-18 13:47:04.475660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:08.344 00:18:08.344 Latency(us) 00:18:08.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.344 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:08.344 Verification LBA range: start 0x0 length 0x4000 00:18:08.344 NVMe0n1 : 15.01 8568.20 33.47 493.91 0.00 14098.94 801.00 18058.81 00:18:08.344 =================================================================================================================== 00:18:08.344 Total : 8568.20 33.47 493.91 0.00 14098.94 801.00 18058.81 00:18:08.344 Received shutdown signal, test time was about 15.000000 seconds 00:18:08.344 00:18:08.344 Latency(us) 00:18:08.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.344 =================================================================================================================== 00:18:08.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.344 13:47:10 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:08.344 13:47:10 -- host/failover.sh@65 -- # count=3 00:18:08.344 13:47:10 -- host/failover.sh@67 -- # (( count != 3 )) 00:18:08.344 13:47:10 -- host/failover.sh@73 -- # bdevperf_pid=2649120 00:18:08.344 13:47:10 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:08.344 13:47:10 -- host/failover.sh@75 -- # waitforlisten 2649120 /var/tmp/bdevperf.sock 00:18:08.344 13:47:10 -- common/autotest_common.sh@817 -- # '[' -z 2649120 ']' 00:18:08.344 13:47:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.344 13:47:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.344 13:47:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:08.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.344 13:47:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.344 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 13:47:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.344 13:47:10 -- common/autotest_common.sh@850 -- # return 0 00:18:08.344 13:47:10 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:08.344 [2024-04-18 13:47:10.857233] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:08.344 13:47:10 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:08.344 [2024-04-18 13:47:11.093806] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:08.344 13:47:11 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:08.909 NVMe0n1 00:18:08.909 13:47:11 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.167 00:18:09.424 13:47:11 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.711 00:18:09.711 13:47:12 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:18:09.711 13:47:12 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:09.969 13:47:12 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:10.227 13:47:12 -- host/failover.sh@87 -- # sleep 3 00:18:13.504 13:47:15 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.504 13:47:15 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:13.504 13:47:16 -- host/failover.sh@90 -- # run_test_pid=2649870 00:18:13.504 13:47:16 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.504 13:47:16 -- host/failover.sh@92 -- # wait 2649870 00:18:14.876 0 00:18:14.876 13:47:17 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:14.876 [2024-04-18 13:47:10.352746] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:18:14.876 [2024-04-18 13:47:10.352832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649120 ] 00:18:14.876 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.876 [2024-04-18 13:47:10.416127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.876 [2024-04-18 13:47:10.527741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.876 [2024-04-18 13:47:12.923706] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:14.876 [2024-04-18 13:47:12.923784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.876 [2024-04-18 13:47:12.923807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.876 [2024-04-18 13:47:12.923824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.876 [2024-04-18 13:47:12.923838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.876 [2024-04-18 13:47:12.923859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.876 [2024-04-18 13:47:12.923872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.876 [2024-04-18 13:47:12.923885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.876 [2024-04-18 13:47:12.923898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.876 [2024-04-18 13:47:12.923911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.876 [2024-04-18 13:47:12.923960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.876 [2024-04-18 13:47:12.923991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c38a0 (9): Bad file descriptor 00:18:14.876 [2024-04-18 13:47:12.944894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:14.876 Running I/O for 1 seconds... 00:18:14.876 00:18:14.876 Latency(us) 00:18:14.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.876 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:14.876 Verification LBA range: start 0x0 length 0x4000 00:18:14.876 NVMe0n1 : 1.00 8634.26 33.73 0.00 0.00 14765.20 3228.25 15340.28 00:18:14.876 =================================================================================================================== 00:18:14.877 Total : 8634.26 33.73 0.00 0.00 14765.20 3228.25 15340.28 00:18:14.877 13:47:17 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:14.877 13:47:17 -- host/failover.sh@95 -- # grep -q NVMe0 00:18:14.877 13:47:17 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:15.133 13:47:17 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:15.133 13:47:17 -- host/failover.sh@99 -- # grep -q NVMe0 00:18:15.390 13:47:18 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:15.647 13:47:18 -- host/failover.sh@101 -- # sleep 3 00:18:18.925 13:47:21 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:18.925 13:47:21 -- host/failover.sh@103 -- # grep -q NVMe0 00:18:18.925 13:47:21 -- host/failover.sh@108 -- # killprocess 2649120 00:18:18.925 13:47:21 -- common/autotest_common.sh@936 -- # '[' -z 2649120 ']' 00:18:18.925 13:47:21 -- common/autotest_common.sh@940 -- # kill -0 2649120 00:18:18.925 13:47:21 -- common/autotest_common.sh@941 -- # uname 00:18:18.925 13:47:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.925 13:47:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2649120 00:18:18.925 13:47:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.925 13:47:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.926 13:47:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2649120' 00:18:18.926 killing process with pid 2649120 00:18:18.926 13:47:21 -- common/autotest_common.sh@955 -- # kill 2649120 00:18:18.926 13:47:21 -- common/autotest_common.sh@960 -- # wait 2649120 00:18:19.183 13:47:21 -- host/failover.sh@110 -- # sync 00:18:19.183 13:47:21 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.440 13:47:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:19.440 13:47:22 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:19.440 13:47:22 -- host/failover.sh@116 -- # nvmftestfini 00:18:19.440 13:47:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:19.440 13:47:22 -- 
nvmf/common.sh@117 -- # sync 00:18:19.440 13:47:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.440 13:47:22 -- nvmf/common.sh@120 -- # set +e 00:18:19.440 13:47:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.440 13:47:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.440 rmmod nvme_tcp 00:18:19.440 rmmod nvme_fabrics 00:18:19.698 rmmod nvme_keyring 00:18:19.698 13:47:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.698 13:47:22 -- nvmf/common.sh@124 -- # set -e 00:18:19.698 13:47:22 -- nvmf/common.sh@125 -- # return 0 00:18:19.698 13:47:22 -- nvmf/common.sh@478 -- # '[' -n 2646918 ']' 00:18:19.698 13:47:22 -- nvmf/common.sh@479 -- # killprocess 2646918 00:18:19.698 13:47:22 -- common/autotest_common.sh@936 -- # '[' -z 2646918 ']' 00:18:19.698 13:47:22 -- common/autotest_common.sh@940 -- # kill -0 2646918 00:18:19.698 13:47:22 -- common/autotest_common.sh@941 -- # uname 00:18:19.698 13:47:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.698 13:47:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2646918 00:18:19.698 13:47:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.698 13:47:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.698 13:47:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2646918' 00:18:19.698 killing process with pid 2646918 00:18:19.698 13:47:22 -- common/autotest_common.sh@955 -- # kill 2646918 00:18:19.698 13:47:22 -- common/autotest_common.sh@960 -- # wait 2646918 00:18:19.956 13:47:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.956 13:47:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.956 13:47:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.956 13:47:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.956 13:47:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.956 13:47:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:19.956 13:47:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.956 13:47:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.866 13:47:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.866 00:18:21.866 real 0m35.358s 00:18:21.866 user 2m5.018s 00:18:21.866 sys 0m6.097s 00:18:21.866 13:47:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.866 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.866 ************************************ 00:18:21.866 END TEST nvmf_failover 00:18:21.866 ************************************ 00:18:22.123 13:47:24 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:22.123 13:47:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.123 13:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.123 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:22.123 ************************************ 00:18:22.123 START TEST nvmf_discovery 00:18:22.123 ************************************ 00:18:22.123 13:47:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:22.123 * Looking for test storage... 
00:18:22.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:22.123 13:47:24 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.123 13:47:24 -- nvmf/common.sh@7 -- # uname -s 00:18:22.123 13:47:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.123 13:47:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.123 13:47:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.123 13:47:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.123 13:47:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.123 13:47:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.123 13:47:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.123 13:47:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.123 13:47:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.123 13:47:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.123 13:47:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:22.124 13:47:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:22.124 13:47:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.124 13:47:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.124 13:47:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.124 13:47:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.124 13:47:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.124 13:47:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.124 13:47:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.124 13:47:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.124 13:47:24 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.124 13:47:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.124 13:47:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.124 13:47:24 -- paths/export.sh@5 -- # export PATH 00:18:22.124 13:47:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.124 13:47:24 -- nvmf/common.sh@47 -- # : 0 00:18:22.124 13:47:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.124 13:47:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.124 13:47:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.124 13:47:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.124 13:47:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.124 13:47:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.124 13:47:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.124 13:47:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.124 13:47:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:22.124 13:47:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:22.124 13:47:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:22.124 13:47:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:22.124 13:47:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:22.124 13:47:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:22.124 13:47:24 -- host/discovery.sh@25 -- # nvmftestinit 00:18:22.124 13:47:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:22.124 13:47:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.124 13:47:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:22.124 13:47:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:22.124 
13:47:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:22.124 13:47:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.124 13:47:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.124 13:47:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.124 13:47:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:22.124 13:47:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:22.124 13:47:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.124 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:18:24.021 13:47:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:24.021 13:47:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.021 13:47:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.021 13:47:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.021 13:47:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.021 13:47:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.021 13:47:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.021 13:47:26 -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.021 13:47:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.021 13:47:26 -- nvmf/common.sh@296 -- # e810=() 00:18:24.021 13:47:26 -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.021 13:47:26 -- nvmf/common.sh@297 -- # x722=() 00:18:24.021 13:47:26 -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.021 13:47:26 -- nvmf/common.sh@298 -- # mlx=() 00:18:24.021 13:47:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.021 13:47:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.021 13:47:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.021 13:47:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.021 13:47:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.021 13:47:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:24.021 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:24.021 13:47:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.021 13:47:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:24.021 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:24.021 13:47:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.021 13:47:26 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.021 13:47:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.021 13:47:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.021 13:47:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:24.021 Found net devices under 0000:84:00.0: cvl_0_0 00:18:24.021 13:47:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.021 13:47:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.021 13:47:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.021 13:47:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.021 13:47:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:24.021 Found net devices under 0000:84:00.1: cvl_0_1 00:18:24.021 13:47:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.021 13:47:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:24.021 13:47:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:24.021 13:47:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:24.021 13:47:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.021 13:47:26 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.021 13:47:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.021 13:47:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.021 13:47:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.021 13:47:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.021 13:47:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.022 13:47:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.022 13:47:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.022 13:47:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:24.022 13:47:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.022 13:47:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.280 13:47:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.280 13:47:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.280 13:47:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.280 13:47:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.280 13:47:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.280 13:47:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.280 13:47:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.280 13:47:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:24.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:18:24.280 00:18:24.280 --- 10.0.0.2 ping statistics --- 00:18:24.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.280 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:24.280 13:47:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:24.280 00:18:24.280 --- 10.0.0.1 ping statistics --- 00:18:24.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.280 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:24.280 13:47:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.280 13:47:26 -- nvmf/common.sh@411 -- # return 0 00:18:24.280 13:47:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:24.280 13:47:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.280 13:47:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:24.280 13:47:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:24.280 13:47:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.280 13:47:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:24.280 13:47:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:24.280 13:47:26 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:24.280 13:47:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.280 13:47:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.280 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:18:24.280 13:47:26 -- nvmf/common.sh@470 -- # nvmfpid=2652538 00:18:24.280 13:47:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.280 13:47:26 -- nvmf/common.sh@471 -- # waitforlisten 2652538 00:18:24.280 13:47:26 -- common/autotest_common.sh@817 
-- # '[' -z 2652538 ']' 00:18:24.280 13:47:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.280 13:47:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.280 13:47:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.280 13:47:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.280 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:18:24.280 [2024-04-18 13:47:27.014325] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:24.280 [2024-04-18 13:47:27.014402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.280 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.280 [2024-04-18 13:47:27.083851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.538 [2024-04-18 13:47:27.201126] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.538 [2024-04-18 13:47:27.201197] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.538 [2024-04-18 13:47:27.201240] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.538 [2024-04-18 13:47:27.201251] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.538 [2024-04-18 13:47:27.201261] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.538 [2024-04-18 13:47:27.201287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.538 13:47:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.538 13:47:27 -- common/autotest_common.sh@850 -- # return 0 00:18:24.538 13:47:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.538 13:47:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.538 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 13:47:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.795 13:47:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.795 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 [2024-04-18 13:47:27.353478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.795 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.795 13:47:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:24.795 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 [2024-04-18 13:47:27.361695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:24.795 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.795 13:47:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:24.795 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 null0 00:18:24.795 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.795 13:47:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:24.795 13:47:27 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 null1 00:18:24.795 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.795 13:47:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:24.795 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.795 13:47:27 -- host/discovery.sh@45 -- # hostpid=2652564 00:18:24.795 13:47:27 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:24.795 13:47:27 -- host/discovery.sh@46 -- # waitforlisten 2652564 /tmp/host.sock 00:18:24.795 13:47:27 -- common/autotest_common.sh@817 -- # '[' -z 2652564 ']' 00:18:24.795 13:47:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:24.795 13:47:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.795 13:47:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:24.795 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:24.795 13:47:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.795 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.795 [2024-04-18 13:47:27.433558] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:18:24.795 [2024-04-18 13:47:27.433622] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2652564 ] 00:18:24.795 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.795 [2024-04-18 13:47:27.496080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.054 [2024-04-18 13:47:27.611433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.054 13:47:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.054 13:47:27 -- common/autotest_common.sh@850 -- # return 0 00:18:25.054 13:47:27 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.054 13:47:27 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.054 13:47:27 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.054 13:47:27 -- host/discovery.sh@72 -- # notify_id=0 00:18:25.054 13:47:27 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 
13:47:27 -- host/discovery.sh@59 -- # sort 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # xargs 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.054 13:47:27 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:25.054 13:47:27 -- host/discovery.sh@84 -- # get_bdev_list 00:18:25.054 13:47:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 13:47:27 -- host/discovery.sh@55 -- # sort 00:18:25.054 13:47:27 -- host/discovery.sh@55 -- # xargs 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.054 13:47:27 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:25.054 13:47:27 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.054 13:47:27 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.054 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # sort 00:18:25.054 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.054 13:47:27 -- host/discovery.sh@59 -- # xargs 00:18:25.054 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.312 13:47:27 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:25.312 13:47:27 -- host/discovery.sh@88 -- # get_bdev_list 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.312 13:47:27 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.312 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # sort 00:18:25.312 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # xargs 00:18:25.312 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.312 13:47:27 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:25.312 13:47:27 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:25.312 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.312 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.312 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.312 13:47:27 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:25.312 13:47:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.312 13:47:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.312 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.312 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.312 13:47:27 -- host/discovery.sh@59 -- # sort 00:18:25.312 13:47:27 -- host/discovery.sh@59 -- # xargs 00:18:25.312 13:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.312 13:47:27 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:25.312 13:47:27 -- host/discovery.sh@92 -- # get_bdev_list 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.312 13:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.312 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # sort 00:18:25.312 13:47:27 -- host/discovery.sh@55 -- # xargs 00:18:25.312 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.312 13:47:28 -- host/discovery.sh@92 -- 
# [[ '' == '' ]] 00:18:25.312 13:47:28 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:25.312 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.312 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.312 [2024-04-18 13:47:28.039446] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.312 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.313 13:47:28 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:25.313 13:47:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.313 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.313 13:47:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.313 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.313 13:47:28 -- host/discovery.sh@59 -- # sort 00:18:25.313 13:47:28 -- host/discovery.sh@59 -- # xargs 00:18:25.313 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.313 13:47:28 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:25.313 13:47:28 -- host/discovery.sh@98 -- # get_bdev_list 00:18:25.313 13:47:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.313 13:47:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:25.313 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.313 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.313 13:47:28 -- host/discovery.sh@55 -- # sort 00:18:25.313 13:47:28 -- host/discovery.sh@55 -- # xargs 00:18:25.313 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.571 13:47:28 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:25.571 13:47:28 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:25.571 13:47:28 -- host/discovery.sh@79 -- # expected_count=0 00:18:25.571 13:47:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:18:25.571 13:47:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:25.571 13:47:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:25.571 13:47:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:25.571 13:47:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:25.571 13:47:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:25.571 13:47:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:25.571 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.571 13:47:28 -- host/discovery.sh@74 -- # jq '. | length' 00:18:25.571 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.571 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.571 13:47:28 -- host/discovery.sh@74 -- # notification_count=0 00:18:25.571 13:47:28 -- host/discovery.sh@75 -- # notify_id=0 00:18:25.571 13:47:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:25.571 13:47:28 -- common/autotest_common.sh@904 -- # return 0 00:18:25.571 13:47:28 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:25.571 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.571 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.571 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.571 13:47:28 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:25.571 13:47:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:25.571 13:47:28 -- common/autotest_common.sh@901 -- # local max=10 00:18:25.571 13:47:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:25.571 13:47:28 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:25.571 13:47:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:25.571 13:47:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:25.571 13:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.571 13:47:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:25.571 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.571 13:47:28 -- host/discovery.sh@59 -- # sort 00:18:25.571 13:47:28 -- host/discovery.sh@59 -- # xargs 00:18:25.571 13:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.571 13:47:28 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:25.571 13:47:28 -- common/autotest_common.sh@906 -- # sleep 1 00:18:26.138 [2024-04-18 13:47:28.791383] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:26.138 [2024-04-18 13:47:28.791418] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:26.138 [2024-04-18 13:47:28.791443] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:26.138 [2024-04-18 13:47:28.878738] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:26.138 [2024-04-18 13:47:28.941586] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:26.138 [2024-04-18 13:47:28.941613] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:26.704 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # 
get_subsystem_names 00:18:26.704 13:47:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.704 13:47:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.704 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.704 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.704 13:47:29 -- host/discovery.sh@59 -- # sort 00:18:26.704 13:47:29 -- host/discovery.sh@59 -- # xargs 00:18:26.704 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.704 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.704 13:47:29 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:26.704 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:26.704 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.704 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:26.704 13:47:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.704 13:47:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.704 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.704 13:47:29 -- host/discovery.sh@55 -- # sort 00:18:26.704 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.704 13:47:29 -- host/discovery.sh@55 -- # xargs 00:18:26.704 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.704 13:47:29 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:26.705 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.705 13:47:29 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 
00:18:26.705 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:26.705 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.705 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:26.705 13:47:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:26.705 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.705 13:47:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:26.705 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.705 13:47:29 -- host/discovery.sh@63 -- # sort -n 00:18:26.705 13:47:29 -- host/discovery.sh@63 -- # xargs 00:18:26.705 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:26.705 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.705 13:47:29 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:26.705 13:47:29 -- host/discovery.sh@79 -- # expected_count=1 00:18:26.705 13:47:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:26.705 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:26.705 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.705 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:26.705 13:47:29 -- host/discovery.sh@74 
-- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:26.705 13:47:29 -- host/discovery.sh@74 -- # jq '. | length' 00:18:26.705 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.705 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.705 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.705 13:47:29 -- host/discovery.sh@74 -- # notification_count=1 00:18:26.705 13:47:29 -- host/discovery.sh@75 -- # notify_id=1 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:26.705 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.705 13:47:29 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:26.705 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.705 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.705 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.705 13:47:29 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.705 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.705 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.705 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:26.705 13:47:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:26.705 13:47:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.705 13:47:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.705 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.705 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.705 13:47:29 -- host/discovery.sh@55 -- # sort 00:18:26.705 13:47:29 -- host/discovery.sh@55 -- # xargs 00:18:26.996 13:47:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:26.996 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.996 13:47:29 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:26.996 13:47:29 -- host/discovery.sh@79 -- # expected_count=1 00:18:26.996 13:47:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:26.996 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:26.996 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.996 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:26.996 13:47:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:26.996 13:47:29 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:26.996 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.996 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.996 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.996 13:47:29 -- host/discovery.sh@74 -- # notification_count=1 00:18:26.996 13:47:29 -- host/discovery.sh@75 -- # notify_id=2 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:26.996 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.996 13:47:29 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:26.996 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.996 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.996 [2024-04-18 13:47:29.700260] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:26.996 [2024-04-18 13:47:29.700777] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:26.996 [2024-04-18 13:47:29.700813] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:26.996 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.996 13:47:29 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.996 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:26.996 13:47:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.996 13:47:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.996 13:47:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.996 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.996 13:47:29 -- host/discovery.sh@59 -- # sort 00:18:26.996 13:47:29 -- host/discovery.sh@59 -- # xargs 00:18:26.996 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.996 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:26.996 13:47:29 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:26.996 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:26.996 13:47:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:26.996 13:47:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.996 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.996 13:47:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.996 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.996 13:47:29 -- host/discovery.sh@55 -- # sort 00:18:26.996 13:47:29 -- host/discovery.sh@55 -- # xargs 00:18:26.996 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.279 [2024-04-18 13:47:29.786505] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:27.279 13:47:29 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:27.279 13:47:29 -- common/autotest_common.sh@904 -- # return 0 00:18:27.279 13:47:29 -- host/discovery.sh@122 
-- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:27.279 13:47:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:27.279 13:47:29 -- common/autotest_common.sh@901 -- # local max=10 00:18:27.279 13:47:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:27.279 13:47:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:27.279 13:47:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:27.279 13:47:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:27.279 13:47:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:27.279 13:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.279 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:18:27.279 13:47:29 -- host/discovery.sh@63 -- # sort -n 00:18:27.279 13:47:29 -- host/discovery.sh@63 -- # xargs 00:18:27.279 13:47:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.279 13:47:29 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:27.279 13:47:29 -- common/autotest_common.sh@906 -- # sleep 1 00:18:27.279 [2024-04-18 13:47:30.047771] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:27.279 [2024-04-18 13:47:30.047810] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:27.279 [2024-04-18 13:47:30.047820] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:28.213 13:47:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.213 13:47:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:28.213 13:47:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:28.213 13:47:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.213 13:47:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.213 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 13:47:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.213 13:47:30 -- host/discovery.sh@63 -- # sort -n 00:18:28.213 13:47:30 -- host/discovery.sh@63 -- # xargs 00:18:28.213 13:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.213 13:47:30 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:28.213 13:47:30 -- common/autotest_common.sh@904 -- # return 0 00:18:28.213 13:47:30 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:28.213 13:47:30 -- host/discovery.sh@79 -- # expected_count=0 00:18:28.214 13:47:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.214 13:47:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.214 13:47:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.214 13:47:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:28.214 13:47:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.214 13:47:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.214 13:47:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.214 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.214 13:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.214 13:47:30 -- host/discovery.sh@74 -- # notification_count=0 00:18:28.214 13:47:30 -- host/discovery.sh@75 -- # notify_id=2 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:28.214 13:47:30 -- common/autotest_common.sh@904 -- # return 0 00:18:28.214 13:47:30 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.214 13:47:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.214 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.214 [2024-04-18 13:47:30.924730] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:28.214 [2024-04-18 13:47:30.924775] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:28.214 13:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.214 13:47:30 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.214 13:47:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:28.214 13:47:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.214 13:47:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.214 13:47:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.214 13:47:30 -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.214 13:47:30 -- host/discovery.sh@59 -- # sort 00:18:28.214 13:47:30 -- host/discovery.sh@59 -- # xargs 00:18:28.214 [2024-04-18 13:47:30.932812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.214 [2024-04-18 13:47:30.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.214 [2024-04-18 13:47:30.932869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.214 [2024-04-18 13:47:30.932884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.214 [2024-04-18 13:47:30.932899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.214 [2024-04-18 13:47:30.932914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.214 [2024-04-18 13:47:30.932929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.214 [2024-04-18 13:47:30.932944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.214 [2024-04-18 13:47:30.932960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 13:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.214 [2024-04-18 13:47:30.942814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.214 [2024-04-18 13:47:30.952860] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.214 [2024-04-18 13:47:30.953130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.953290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.953318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.214 [2024-04-18 13:47:30.953335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 [2024-04-18 13:47:30.953359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.214 [2024-04-18 13:47:30.953397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.214 [2024-04-18 13:47:30.953416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.214 [2024-04-18 13:47:30.953435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.214 [2024-04-18 13:47:30.953457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.214 [2024-04-18 13:47:30.962938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.214 [2024-04-18 13:47:30.963087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.963282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.963323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.214 [2024-04-18 13:47:30.963339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 [2024-04-18 13:47:30.963366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.214 [2024-04-18 13:47:30.963387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.214 [2024-04-18 13:47:30.963401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.214 [2024-04-18 13:47:30.963414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.214 [2024-04-18 13:47:30.963434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.214 13:47:30 -- common/autotest_common.sh@904 -- # return 0 00:18:28.214 13:47:30 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.214 13:47:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.214 13:47:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:28.214 13:47:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.214 13:47:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.214 13:47:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.214 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.214 13:47:30 -- host/discovery.sh@55 -- # sort 00:18:28.214 13:47:30 -- host/discovery.sh@55 -- # xargs 00:18:28.214 [2024-04-18 13:47:30.973008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.214 [2024-04-18 13:47:30.973205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.973348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.973374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.214 [2024-04-18 13:47:30.973391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 [2024-04-18 13:47:30.973413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad 
file descriptor 00:18:28.214 [2024-04-18 13:47:30.974190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.214 [2024-04-18 13:47:30.974215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.214 [2024-04-18 13:47:30.974230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.214 [2024-04-18 13:47:30.974263] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.214 [2024-04-18 13:47:30.983080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.214 [2024-04-18 13:47:30.983326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.983713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.983738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.214 [2024-04-18 13:47:30.983753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 [2024-04-18 13:47:30.983774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.214 [2024-04-18 13:47:30.983805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.214 [2024-04-18 13:47:30.983825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.214 [2024-04-18 13:47:30.983844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:28.214 [2024-04-18 13:47:30.983863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.214 [2024-04-18 13:47:30.993151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.214 [2024-04-18 13:47:30.993571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.993745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.214 [2024-04-18 13:47:30.993770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.214 [2024-04-18 13:47:30.993785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.214 [2024-04-18 13:47:30.993806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.214 [2024-04-18 13:47:30.993838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.214 [2024-04-18 13:47:30.993854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.214 [2024-04-18 13:47:30.993868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.215 [2024-04-18 13:47:30.993897] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.215 13:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.215 [2024-04-18 13:47:31.003427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.215 [2024-04-18 13:47:31.003702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.215 [2024-04-18 13:47:31.003871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.215 [2024-04-18 13:47:31.003895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.215 [2024-04-18 13:47:31.003910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.215 [2024-04-18 13:47:31.003930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.215 [2024-04-18 13:47:31.003961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.215 [2024-04-18 13:47:31.003978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.215 [2024-04-18 13:47:31.003990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.215 [2024-04-18 13:47:31.004008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.215 13:47:31 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.215 13:47:31 -- common/autotest_common.sh@904 -- # return 0 00:18:28.215 13:47:31 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.215 13:47:31 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.215 13:47:31 -- common/autotest_common.sh@901 -- # local max=10 00:18:28.215 13:47:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:28.215 13:47:31 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:28.215 13:47:31 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:28.215 13:47:31 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.215 [2024-04-18 13:47:31.013522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.215 13:47:31 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.215 13:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.215 13:47:31 -- host/discovery.sh@63 -- # sort -n 00:18:28.215 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.215 [2024-04-18 13:47:31.013734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.215 [2024-04-18 13:47:31.013944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.215 [2024-04-18 13:47:31.013969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.215 [2024-04-18 13:47:31.013984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.215 [2024-04-18 13:47:31.014013] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.215 [2024-04-18 13:47:31.014056] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.215 [2024-04-18 13:47:31.014074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.215 [2024-04-18 13:47:31.014087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.215 [2024-04-18 13:47:31.014105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.215 13:47:31 -- host/discovery.sh@63 -- # xargs 00:18:28.473 [2024-04-18 13:47:31.023609] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.473 [2024-04-18 13:47:31.023801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.023984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.024020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.473 [2024-04-18 13:47:31.024046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.473 [2024-04-18 13:47:31.024073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.473 [2024-04-18 13:47:31.024094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.473 [2024-04-18 13:47:31.024108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.473 [2024-04-18 13:47:31.024121] 
nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.473 [2024-04-18 13:47:31.024141] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.473 13:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.473 [2024-04-18 13:47:31.033686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.473 [2024-04-18 13:47:31.033853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.034017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.034047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.473 [2024-04-18 13:47:31.034065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.473 [2024-04-18 13:47:31.034090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.473 [2024-04-18 13:47:31.034128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.473 [2024-04-18 13:47:31.034148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.473 [2024-04-18 13:47:31.034164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.473 [2024-04-18 13:47:31.034194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.473 [2024-04-18 13:47:31.043765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:28.473 [2024-04-18 13:47:31.043980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.044138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.473 [2024-04-18 13:47:31.044167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b1df0 with addr=10.0.0.2, port=4420 00:18:28.473 [2024-04-18 13:47:31.044194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1df0 is same with the state(5) to be set 00:18:28.473 [2024-04-18 13:47:31.044233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1df0 (9): Bad file descriptor 00:18:28.473 [2024-04-18 13:47:31.044254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.473 [2024-04-18 13:47:31.044268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.473 [2024-04-18 13:47:31.044282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.473 [2024-04-18 13:47:31.044300] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.473 [2024-04-18 13:47:31.050372] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:28.473 [2024-04-18 13:47:31.050404] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:28.473 13:47:31 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:18:28.473 13:47:31 -- common/autotest_common.sh@906 -- # sleep 1 00:18:29.408 13:47:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:29.408 13:47:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:29.408 13:47:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:29.408 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.408 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.408 13:47:32 -- host/discovery.sh@63 -- # sort -n 00:18:29.408 13:47:32 -- host/discovery.sh@63 -- # xargs 00:18:29.408 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.408 13:47:32 -- common/autotest_common.sh@904 -- # return 0 00:18:29.408 13:47:32 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:29.408 13:47:32 -- host/discovery.sh@79 -- # expected_count=0 00:18:29.408 13:47:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:29.408 13:47:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:29.408 13:47:32 -- common/autotest_common.sh@901 -- # local max=10 00:18:29.408 
13:47:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:29.408 13:47:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:29.408 13:47:32 -- host/discovery.sh@74 -- # jq '. | length' 00:18:29.408 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.408 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.408 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.408 13:47:32 -- host/discovery.sh@74 -- # notification_count=0 00:18:29.408 13:47:32 -- host/discovery.sh@75 -- # notify_id=2 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:29.408 13:47:32 -- common/autotest_common.sh@904 -- # return 0 00:18:29.408 13:47:32 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:29.408 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.408 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.408 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.408 13:47:32 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@901 -- # local max=10 00:18:29.408 13:47:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:29.408 13:47:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:29.408 13:47:32 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:18:29.408 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.408 13:47:32 -- host/discovery.sh@59 -- # sort 00:18:29.408 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.408 13:47:32 -- host/discovery.sh@59 -- # xargs 00:18:29.408 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:29.408 13:47:32 -- common/autotest_common.sh@904 -- # return 0 00:18:29.408 13:47:32 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@901 -- # local max=10 00:18:29.408 13:47:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:29.408 13:47:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:29.408 13:47:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.408 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.408 13:47:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:29.408 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.408 13:47:32 -- host/discovery.sh@55 -- # sort 00:18:29.408 13:47:32 -- host/discovery.sh@55 -- # xargs 00:18:29.408 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.666 13:47:32 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:29.666 13:47:32 -- common/autotest_common.sh@904 -- # return 0 00:18:29.666 13:47:32 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:29.666 13:47:32 -- host/discovery.sh@79 -- # expected_count=2 00:18:29.666 13:47:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:29.666 13:47:32 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:29.666 13:47:32 -- common/autotest_common.sh@901 -- # local max=10 00:18:29.666 13:47:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:29.666 13:47:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:29.666 13:47:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:29.666 13:47:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:29.666 13:47:32 -- host/discovery.sh@74 -- # jq '. | length' 00:18:29.666 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.666 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.666 13:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.666 13:47:32 -- host/discovery.sh@74 -- # notification_count=2 00:18:29.666 13:47:32 -- host/discovery.sh@75 -- # notify_id=4 00:18:29.666 13:47:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:29.666 13:47:32 -- common/autotest_common.sh@904 -- # return 0 00:18:29.666 13:47:32 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:29.666 13:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.666 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:18:30.599 [2024-04-18 13:47:33.292852] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:30.599 [2024-04-18 13:47:33.292881] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:30.599 [2024-04-18 13:47:33.292908] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:30.599 [2024-04-18 13:47:33.380193] bdev_nvme.c:6835:discovery_log_page_cb: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:30.856 [2024-04-18 13:47:33.651220] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:30.856 [2024-04-18 13:47:33.651270] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:30.856 13:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.856 13:47:33 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.856 13:47:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:30.856 13:47:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.856 13:47:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:30.856 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.856 13:47:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:30.856 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:30.856 13:47:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.856 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.856 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 request: 00:18:31.115 { 00:18:31.115 "name": "nvme", 00:18:31.115 "trtype": "tcp", 00:18:31.115 "traddr": "10.0.0.2", 00:18:31.115 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:31.115 "adrfam": "ipv4", 00:18:31.115 "trsvcid": "8009", 00:18:31.115 "wait_for_attach": true, 00:18:31.115 "method": "bdev_nvme_start_discovery", 00:18:31.115 "req_id": 1 00:18:31.115 } 00:18:31.115 Got 
JSON-RPC error response 00:18:31.115 response: 00:18:31.115 { 00:18:31.115 "code": -17, 00:18:31.115 "message": "File exists" 00:18:31.115 } 00:18:31.115 13:47:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:31.115 13:47:33 -- common/autotest_common.sh@641 -- # es=1 00:18:31.115 13:47:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:31.115 13:47:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:31.115 13:47:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:31.115 13:47:33 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:31.115 13:47:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:31.115 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.115 13:47:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:31.115 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 13:47:33 -- host/discovery.sh@67 -- # sort 00:18:31.115 13:47:33 -- host/discovery.sh@67 -- # xargs 00:18:31.115 13:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.115 13:47:33 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:31.115 13:47:33 -- host/discovery.sh@146 -- # get_bdev_list 00:18:31.115 13:47:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.115 13:47:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.115 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.115 13:47:33 -- host/discovery.sh@55 -- # sort 00:18:31.115 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 13:47:33 -- host/discovery.sh@55 -- # xargs 00:18:31.115 13:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.116 13:47:33 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:31.116 13:47:33 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 
00:18:31.116 13:47:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:31.116 13:47:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.116 13:47:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.116 13:47:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.116 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.116 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.116 request: 00:18:31.116 { 00:18:31.116 "name": "nvme_second", 00:18:31.116 "trtype": "tcp", 00:18:31.116 "traddr": "10.0.0.2", 00:18:31.116 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:31.116 "adrfam": "ipv4", 00:18:31.116 "trsvcid": "8009", 00:18:31.116 "wait_for_attach": true, 00:18:31.116 "method": "bdev_nvme_start_discovery", 00:18:31.116 "req_id": 1 00:18:31.116 } 00:18:31.116 Got JSON-RPC error response 00:18:31.116 response: 00:18:31.116 { 00:18:31.116 "code": -17, 00:18:31.116 "message": "File exists" 00:18:31.116 } 00:18:31.116 13:47:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:31.116 13:47:33 -- common/autotest_common.sh@641 -- # es=1 00:18:31.116 13:47:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:31.116 13:47:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:31.116 13:47:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:31.116 13:47:33 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:31.116 13:47:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:18:31.116 13:47:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:31.116 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.116 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.116 13:47:33 -- host/discovery.sh@67 -- # sort 00:18:31.116 13:47:33 -- host/discovery.sh@67 -- # xargs 00:18:31.116 13:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.116 13:47:33 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:31.116 13:47:33 -- host/discovery.sh@152 -- # get_bdev_list 00:18:31.116 13:47:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.116 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.116 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:31.116 13:47:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.116 13:47:33 -- host/discovery.sh@55 -- # sort 00:18:31.116 13:47:33 -- host/discovery.sh@55 -- # xargs 00:18:31.116 13:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.116 13:47:33 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:31.116 13:47:33 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.116 13:47:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:31.116 13:47:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.116 13:47:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:31.116 13:47:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.116 13:47:33 -- 
common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.116 13:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.116 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:18:32.050 [2024-04-18 13:47:34.838690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.050 [2024-04-18 13:47:34.838912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.050 [2024-04-18 13:47:34.838942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11306b0 with addr=10.0.0.2, port=8010 00:18:32.050 [2024-04-18 13:47:34.838969] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:32.050 [2024-04-18 13:47:34.838985] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:32.050 [2024-04-18 13:47:34.838999] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:33.432 [2024-04-18 13:47:35.841227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.432 [2024-04-18 13:47:35.841521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.432 [2024-04-18 13:47:35.841552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11306b0 with addr=10.0.0.2, port=8010 00:18:33.432 [2024-04-18 13:47:35.841586] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:33.432 [2024-04-18 13:47:35.841613] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:33.432 [2024-04-18 13:47:35.841639] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:34.364 [2024-04-18 13:47:36.843226] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery 
ctrlr 00:18:34.364 request: 00:18:34.364 { 00:18:34.364 "name": "nvme_second", 00:18:34.364 "trtype": "tcp", 00:18:34.364 "traddr": "10.0.0.2", 00:18:34.364 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:34.364 "adrfam": "ipv4", 00:18:34.364 "trsvcid": "8010", 00:18:34.364 "attach_timeout_ms": 3000, 00:18:34.364 "method": "bdev_nvme_start_discovery", 00:18:34.364 "req_id": 1 00:18:34.364 } 00:18:34.364 Got JSON-RPC error response 00:18:34.364 response: 00:18:34.364 { 00:18:34.364 "code": -110, 00:18:34.364 "message": "Connection timed out" 00:18:34.364 } 00:18:34.364 13:47:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:34.364 13:47:36 -- common/autotest_common.sh@641 -- # es=1 00:18:34.364 13:47:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:34.364 13:47:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:34.364 13:47:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:34.364 13:47:36 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:34.364 13:47:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:34.364 13:47:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.364 13:47:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:34.364 13:47:36 -- common/autotest_common.sh@10 -- # set +x 00:18:34.364 13:47:36 -- host/discovery.sh@67 -- # sort 00:18:34.364 13:47:36 -- host/discovery.sh@67 -- # xargs 00:18:34.364 13:47:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.364 13:47:36 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:34.364 13:47:36 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:34.364 13:47:36 -- host/discovery.sh@161 -- # kill 2652564 00:18:34.364 13:47:36 -- host/discovery.sh@162 -- # nvmftestfini 00:18:34.364 13:47:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:34.364 13:47:36 -- nvmf/common.sh@117 -- # sync 00:18:34.364 13:47:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.364 13:47:36 -- 
nvmf/common.sh@120 -- # set +e 00:18:34.364 13:47:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.364 13:47:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.364 rmmod nvme_tcp 00:18:34.364 rmmod nvme_fabrics 00:18:34.364 rmmod nvme_keyring 00:18:34.364 13:47:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.364 13:47:36 -- nvmf/common.sh@124 -- # set -e 00:18:34.364 13:47:36 -- nvmf/common.sh@125 -- # return 0 00:18:34.364 13:47:36 -- nvmf/common.sh@478 -- # '[' -n 2652538 ']' 00:18:34.364 13:47:36 -- nvmf/common.sh@479 -- # killprocess 2652538 00:18:34.365 13:47:36 -- common/autotest_common.sh@936 -- # '[' -z 2652538 ']' 00:18:34.365 13:47:36 -- common/autotest_common.sh@940 -- # kill -0 2652538 00:18:34.365 13:47:36 -- common/autotest_common.sh@941 -- # uname 00:18:34.365 13:47:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:34.365 13:47:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2652538 00:18:34.365 13:47:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:34.365 13:47:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:34.365 13:47:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2652538' 00:18:34.365 killing process with pid 2652538 00:18:34.365 13:47:36 -- common/autotest_common.sh@955 -- # kill 2652538 00:18:34.365 13:47:36 -- common/autotest_common.sh@960 -- # wait 2652538 00:18:34.624 13:47:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:34.624 13:47:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:34.624 13:47:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:34.624 13:47:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.624 13:47:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.624 13:47:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.624 13:47:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:34.624 13:47:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.525 13:47:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.525 00:18:36.525 real 0m14.478s 00:18:36.525 user 0m21.593s 00:18:36.525 sys 0m2.956s 00:18:36.525 13:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:36.525 13:47:39 -- common/autotest_common.sh@10 -- # set +x 00:18:36.525 ************************************ 00:18:36.525 END TEST nvmf_discovery 00:18:36.525 ************************************ 00:18:36.525 13:47:39 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:36.525 13:47:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.525 13:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.525 13:47:39 -- common/autotest_common.sh@10 -- # set +x 00:18:36.784 ************************************ 00:18:36.784 START TEST nvmf_discovery_remove_ifc 00:18:36.784 ************************************ 00:18:36.784 13:47:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:36.784 * Looking for test storage... 
00:18:36.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:36.784 13:47:39 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.784 13:47:39 -- nvmf/common.sh@7 -- # uname -s 00:18:36.784 13:47:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.784 13:47:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.784 13:47:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.784 13:47:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.784 13:47:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.784 13:47:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.784 13:47:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.784 13:47:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.784 13:47:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.784 13:47:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.784 13:47:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.784 13:47:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.784 13:47:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.784 13:47:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.784 13:47:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.784 13:47:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.784 13:47:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.784 13:47:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.784 13:47:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.784 13:47:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.784 13:47:39 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.784 13:47:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.785 13:47:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.785 13:47:39 -- paths/export.sh@5 -- # export PATH 00:18:36.785 13:47:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.785 13:47:39 -- nvmf/common.sh@47 -- # : 0 00:18:36.785 13:47:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.785 13:47:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.785 13:47:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.785 13:47:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.785 13:47:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.785 13:47:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.785 13:47:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.785 13:47:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:36.785 13:47:39 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:36.785 13:47:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:36.785 13:47:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.785 13:47:39 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:18:36.785 13:47:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:36.785 13:47:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:36.785 13:47:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.785 13:47:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.785 13:47:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.785 13:47:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:36.785 13:47:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:36.785 13:47:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.785 13:47:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.688 13:47:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:38.688 13:47:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.688 13:47:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.688 13:47:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.688 13:47:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.688 13:47:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.688 13:47:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.688 13:47:41 -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.688 13:47:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.688 13:47:41 -- nvmf/common.sh@296 -- # e810=() 00:18:38.688 13:47:41 -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.688 13:47:41 -- nvmf/common.sh@297 -- # x722=() 00:18:38.688 13:47:41 -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.688 13:47:41 -- nvmf/common.sh@298 -- # mlx=() 00:18:38.688 13:47:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.688 13:47:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.688 13:47:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.688 13:47:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:38.688 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:38.688 13:47:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.688 13:47:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:38.688 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:38.688 13:47:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.688 
13:47:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.688 13:47:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.688 13:47:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.688 13:47:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:38.688 Found net devices under 0000:84:00.0: cvl_0_0 00:18:38.688 13:47:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.688 13:47:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.688 13:47:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.688 13:47:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:38.688 Found net devices under 0000:84:00.1: cvl_0_1 00:18:38.688 13:47:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:38.688 13:47:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:38.688 13:47:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:38.688 13:47:41 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:18:38.688 13:47:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.688 13:47:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.688 13:47:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.688 13:47:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.688 13:47:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.688 13:47:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.688 13:47:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.688 13:47:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.688 13:47:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.688 13:47:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.688 13:47:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.688 13:47:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.688 13:47:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.688 13:47:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.688 13:47:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.688 13:47:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.688 13:47:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.688 13:47:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:38.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:38.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:18:38.688 00:18:38.688 --- 10.0.0.2 ping statistics --- 00:18:38.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.688 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:38.688 13:47:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:18:38.688 00:18:38.688 --- 10.0.0.1 ping statistics --- 00:18:38.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.688 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:38.688 13:47:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.688 13:47:41 -- nvmf/common.sh@411 -- # return 0 00:18:38.688 13:47:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:38.946 13:47:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.946 13:47:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:38.946 13:47:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:38.946 13:47:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.946 13:47:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:38.946 13:47:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:38.946 13:47:41 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:38.946 13:47:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:38.946 13:47:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:38.946 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:18:38.946 13:47:41 -- nvmf/common.sh@470 -- # nvmfpid=2655878 00:18:38.946 13:47:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.946 13:47:41 -- nvmf/common.sh@471 -- # waitforlisten 2655878 00:18:38.946 13:47:41 -- 
common/autotest_common.sh@817 -- # '[' -z 2655878 ']' 00:18:38.946 13:47:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.946 13:47:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:38.947 13:47:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.947 13:47:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:38.947 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:18:38.947 [2024-04-18 13:47:41.562391] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:38.947 [2024-04-18 13:47:41.562476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.947 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.947 [2024-04-18 13:47:41.631656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.947 [2024-04-18 13:47:41.748791] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.947 [2024-04-18 13:47:41.748849] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.947 [2024-04-18 13:47:41.748863] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.947 [2024-04-18 13:47:41.748875] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.947 [2024-04-18 13:47:41.748902] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:38.947 [2024-04-18 13:47:41.748929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.205 13:47:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.205 13:47:41 -- common/autotest_common.sh@850 -- # return 0 00:18:39.205 13:47:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:39.205 13:47:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:39.205 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.205 13:47:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.205 13:47:41 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:39.205 13:47:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.205 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.205 [2024-04-18 13:47:41.905849] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.205 [2024-04-18 13:47:41.914015] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:39.205 null0 00:18:39.205 [2024-04-18 13:47:41.945951] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.205 13:47:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.205 13:47:41 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2655903 00:18:39.205 13:47:41 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2655903 /tmp/host.sock 00:18:39.205 13:47:41 -- common/autotest_common.sh@817 -- # '[' -z 2655903 ']' 00:18:39.205 13:47:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:39.205 13:47:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:39.205 13:47:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:39.205 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
00:18:39.205 13:47:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:39.205 13:47:41 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:39.205 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:18:39.205 [2024-04-18 13:47:42.011681] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:18:39.205 [2024-04-18 13:47:42.011770] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655903 ] 00:18:39.463 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.463 [2024-04-18 13:47:42.070946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.463 [2024-04-18 13:47:42.178224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.463 13:47:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.463 13:47:42 -- common/autotest_common.sh@850 -- # return 0 00:18:39.463 13:47:42 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.463 13:47:42 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:39.463 13:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.463 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:18:39.463 13:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.463 13:47:42 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:39.463 13:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.463 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:18:39.722 13:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.722 13:47:42 -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:39.722 13:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.722 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:18:40.655 [2024-04-18 13:47:43.365381] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:40.655 [2024-04-18 13:47:43.365410] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:40.655 [2024-04-18 13:47:43.365434] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:40.655 [2024-04-18 13:47:43.451757] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:40.914 [2024-04-18 13:47:43.555420] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:40.914 [2024-04-18 13:47:43.555498] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:40.914 [2024-04-18 13:47:43.555544] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:40.914 [2024-04-18 13:47:43.555571] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:40.914 [2024-04-18 13:47:43.555606] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:40.914 13:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.914 13:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.914 13:47:43 -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.914 [2024-04-18 13:47:43.561807] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10c9700 was disconnected and freed. delete nvme_qpair. 00:18:40.914 13:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.914 13:47:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.914 13:47:43 -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.914 13:47:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:40.914 13:47:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.289 13:47:44 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.289 13:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.289 13:47:44 -- common/autotest_common.sh@10 -- # set +x 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.289 13:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:42.289 13:47:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:43.248 13:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.248 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:43.248 13:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:43.248 13:47:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:44.183 13:47:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.183 13:47:46 -- common/autotest_common.sh@10 -- # set +x 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:44.183 13:47:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:18:44.183 13:47:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:45.116 13:47:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.116 13:47:47 -- common/autotest_common.sh@10 -- # set +x 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:45.116 13:47:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:45.116 13:47:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:46.489 13:47:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:46.489 13:47:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:46.490 13:47:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:46.490 13:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.490 13:47:48 -- common/autotest_common.sh@10 -- # set +x 00:18:46.490 13:47:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:46.490 13:47:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:46.490 13:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.490 13:47:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:46.490 13:47:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:46.490 [2024-04-18 13:47:48.996985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:46.490 [2024-04-18 13:47:48.997070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.490 
[2024-04-18 13:47:48.997106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.490 [2024-04-18 13:47:48.997127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.490 [2024-04-18 13:47:48.997143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.490 [2024-04-18 13:47:48.997169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.490 [2024-04-18 13:47:48.997193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.490 [2024-04-18 13:47:48.997225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.490 [2024-04-18 13:47:48.997238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.490 [2024-04-18 13:47:48.997251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.490 [2024-04-18 13:47:48.997264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.490 [2024-04-18 13:47:48.997277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108fb90 is same with the state(5) to be set 00:18:46.490 [2024-04-18 13:47:49.006999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108fb90 (9): Bad file descriptor 00:18:46.490 [2024-04-18 13:47:49.017049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting 
controller 00:18:47.422 13:47:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:47.422 13:47:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.422 13:47:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.422 13:47:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:47.422 13:47:49 -- common/autotest_common.sh@10 -- # set +x 00:18:47.422 13:47:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:47.422 13:47:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:47.422 [2024-04-18 13:47:50.067265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:48.356 [2024-04-18 13:47:51.091227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:48.356 [2024-04-18 13:47:51.091319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108fb90 with addr=10.0.0.2, port=4420 00:18:48.356 [2024-04-18 13:47:51.091351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108fb90 is same with the state(5) to be set 00:18:48.356 [2024-04-18 13:47:51.091902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108fb90 (9): Bad file descriptor 00:18:48.356 [2024-04-18 13:47:51.091960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:48.356 [2024-04-18 13:47:51.092010] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:48.356 [2024-04-18 13:47:51.092059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.356 [2024-04-18 13:47:51.092083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.356 [2024-04-18 13:47:51.092106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.356 [2024-04-18 13:47:51.092122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.356 [2024-04-18 13:47:51.092137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.357 [2024-04-18 13:47:51.092152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.357 [2024-04-18 13:47:51.092167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.357 [2024-04-18 13:47:51.092201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.357 [2024-04-18 13:47:51.092219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.357 [2024-04-18 13:47:51.092248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.357 [2024-04-18 13:47:51.092263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:18:48.357 [2024-04-18 13:47:51.092394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108ffa0 (9): Bad file descriptor 00:18:48.357 [2024-04-18 13:47:51.093418] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:48.357 [2024-04-18 13:47:51.093441] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:48.357 13:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.357 13:47:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:48.357 13:47:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.731 13:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.731 13:47:52 -- common/autotest_common.sh@10 -- # set +x 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:49.731 13:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.731 13:47:52 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:49.731 13:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.731 13:47:52 -- common/autotest_common.sh@10 -- # set +x 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:49.731 13:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:49.731 13:47:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:50.664 [2024-04-18 13:47:53.109611] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:50.664 [2024-04-18 13:47:53.109656] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:50.664 [2024-04-18 13:47:53.109685] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:50.664 [2024-04-18 13:47:53.197941] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:50.664 13:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.664 13:47:53 -- common/autotest_common.sh@10 -- # set +x 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:50.664 13:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.664 [2024-04-18 13:47:53.258974] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:50.664 [2024-04-18 13:47:53.259028] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 
00:18:50.664 [2024-04-18 13:47:53.259065] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:50.664 [2024-04-18 13:47:53.259090] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:50.664 [2024-04-18 13:47:53.259104] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:50.664 [2024-04-18 13:47:53.267633] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x109d850 was disconnected and freed. delete nvme_qpair. 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:50.664 13:47:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:51.598 13:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.598 13:47:54 -- common/autotest_common.sh@10 -- # set +x 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:51.598 13:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:51.598 13:47:54 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2655903 00:18:51.598 13:47:54 -- common/autotest_common.sh@936 -- # '[' -z 2655903 ']' 00:18:51.598 13:47:54 -- common/autotest_common.sh@940 -- # kill -0 2655903 00:18:51.598 13:47:54 -- common/autotest_common.sh@941 -- # uname 00:18:51.598 13:47:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.598 13:47:54 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2655903 00:18:51.598 13:47:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.598 13:47:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.598 13:47:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2655903' 00:18:51.598 killing process with pid 2655903 00:18:51.598 13:47:54 -- common/autotest_common.sh@955 -- # kill 2655903 00:18:51.598 13:47:54 -- common/autotest_common.sh@960 -- # wait 2655903 00:18:51.856 13:47:54 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:51.856 13:47:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:51.856 13:47:54 -- nvmf/common.sh@117 -- # sync 00:18:51.856 13:47:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.856 13:47:54 -- nvmf/common.sh@120 -- # set +e 00:18:51.856 13:47:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.856 13:47:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.856 rmmod nvme_tcp 00:18:51.856 rmmod nvme_fabrics 00:18:52.114 rmmod nvme_keyring 00:18:52.114 13:47:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.114 13:47:54 -- nvmf/common.sh@124 -- # set -e 00:18:52.114 13:47:54 -- nvmf/common.sh@125 -- # return 0 00:18:52.114 13:47:54 -- nvmf/common.sh@478 -- # '[' -n 2655878 ']' 00:18:52.114 13:47:54 -- nvmf/common.sh@479 -- # killprocess 2655878 00:18:52.114 13:47:54 -- common/autotest_common.sh@936 -- # '[' -z 2655878 ']' 00:18:52.114 13:47:54 -- common/autotest_common.sh@940 -- # kill -0 2655878 00:18:52.114 13:47:54 -- common/autotest_common.sh@941 -- # uname 00:18:52.114 13:47:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.114 13:47:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2655878 00:18:52.114 13:47:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:52.114 13:47:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:52.114 13:47:54 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 2655878' 00:18:52.114 killing process with pid 2655878 00:18:52.114 13:47:54 -- common/autotest_common.sh@955 -- # kill 2655878 00:18:52.114 13:47:54 -- common/autotest_common.sh@960 -- # wait 2655878 00:18:52.372 13:47:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:52.372 13:47:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:52.372 13:47:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:52.372 13:47:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.372 13:47:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.372 13:47:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.372 13:47:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.372 13:47:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.277 13:47:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.277 00:18:54.277 real 0m17.657s 00:18:54.277 user 0m24.526s 00:18:54.277 sys 0m2.976s 00:18:54.277 13:47:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.277 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:18:54.277 ************************************ 00:18:54.277 END TEST nvmf_discovery_remove_ifc 00:18:54.277 ************************************ 00:18:54.277 13:47:57 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:54.277 13:47:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:54.277 13:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.277 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:18:54.534 ************************************ 00:18:54.534 START TEST nvmf_identify_kernel_target 00:18:54.534 ************************************ 00:18:54.534 13:47:57 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:54.534 * Looking for test storage... 00:18:54.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:54.534 13:47:57 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.534 13:47:57 -- nvmf/common.sh@7 -- # uname -s 00:18:54.534 13:47:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.534 13:47:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.534 13:47:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.534 13:47:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.534 13:47:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.534 13:47:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.534 13:47:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.534 13:47:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.534 13:47:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.534 13:47:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.534 13:47:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.534 13:47:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.534 13:47:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.534 13:47:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.534 13:47:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.534 13:47:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.534 13:47:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.534 13:47:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.534 13:47:57 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.534 13:47:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.534 13:47:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.535 13:47:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.535 13:47:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.535 13:47:57 -- paths/export.sh@5 -- # export PATH 00:18:54.535 13:47:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.535 13:47:57 -- nvmf/common.sh@47 -- # : 0 00:18:54.535 13:47:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.535 13:47:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.535 13:47:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.535 13:47:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.535 13:47:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.535 13:47:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.535 13:47:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.535 13:47:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.535 13:47:57 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:54.535 13:47:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:54.535 13:47:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.535 13:47:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:54.535 13:47:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:54.535 13:47:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:54.535 13:47:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.535 13:47:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.535 13:47:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.535 13:47:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:54.535 13:47:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:54.535 
13:47:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.535 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:18:57.064 13:47:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:57.064 13:47:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.064 13:47:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.064 13:47:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.064 13:47:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.064 13:47:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.064 13:47:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.064 13:47:59 -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.064 13:47:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.064 13:47:59 -- nvmf/common.sh@296 -- # e810=() 00:18:57.064 13:47:59 -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.064 13:47:59 -- nvmf/common.sh@297 -- # x722=() 00:18:57.064 13:47:59 -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.064 13:47:59 -- nvmf/common.sh@298 -- # mlx=() 00:18:57.064 13:47:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.064 13:47:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.064 13:47:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.064 13:47:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:57.064 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:57.064 13:47:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.064 13:47:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:57.064 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:57.064 13:47:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:57.064 13:47:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.064 13:47:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.064 13:47:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:57.064 Found net devices under 0000:84:00.0: cvl_0_0 00:18:57.064 13:47:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.064 13:47:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.064 13:47:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.064 13:47:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:57.064 Found net devices under 0000:84:00.1: cvl_0_1 00:18:57.064 13:47:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:57.064 13:47:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:57.064 13:47:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:57.064 13:47:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.064 13:47:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.064 13:47:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.064 13:47:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.064 13:47:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.064 13:47:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.064 13:47:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:57.064 13:47:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.064 13:47:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.064 13:47:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.064 13:47:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.064 13:47:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.064 13:47:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.064 13:47:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.064 13:47:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.064 13:47:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.064 13:47:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.064 13:47:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.064 13:47:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:18:57.064 00:18:57.064 --- 10.0.0.2 ping statistics --- 00:18:57.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.064 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:57.064 13:47:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:18:57.064 00:18:57.064 --- 10.0.0.1 ping statistics --- 00:18:57.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.064 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:57.064 13:47:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.064 13:47:59 -- nvmf/common.sh@411 -- # return 0 00:18:57.064 13:47:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:57.064 13:47:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.065 13:47:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.065 13:47:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:57.065 13:47:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:57.065 13:47:59 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:57.065 13:47:59 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:57.065 13:47:59 -- nvmf/common.sh@717 -- # local ip 00:18:57.065 13:47:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.065 13:47:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.065 13:47:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.065 13:47:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.065 13:47:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:57.065 13:47:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:57.065 13:47:59 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:57.065 13:47:59 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:57.065 13:47:59 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:57.065 13:47:59 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:57.065 13:47:59 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:57.065 13:47:59 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:57.065 13:47:59 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:57.065 13:47:59 -- nvmf/common.sh@628 -- # local block nvme 00:18:57.065 13:47:59 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:57.065 13:47:59 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:57.065 13:47:59 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:57.998 Waiting for block devices as requested 00:18:57.998 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:18:57.998 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:57.998 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:57.998 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:58.255 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:58.255 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:58.255 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:58.255 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:58.513 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:58.513 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:58.513 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:58.513 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:58.513 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:58.796 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:58.796 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:58.796 0000:80:04.1 (8086 0e21): vfio-pci 
-> ioatdma 00:18:58.796 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:59.056 13:48:01 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:59.056 13:48:01 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:59.056 13:48:01 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:59.056 13:48:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:59.056 13:48:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:59.056 13:48:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:59.056 13:48:01 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:59.056 13:48:01 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:59.056 13:48:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:59.056 No valid GPT data, bailing 00:18:59.056 13:48:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:59.056 13:48:01 -- scripts/common.sh@391 -- # pt= 00:18:59.056 13:48:01 -- scripts/common.sh@392 -- # return 1 00:18:59.056 13:48:01 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:59.056 13:48:01 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:59.056 13:48:01 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:59.056 13:48:01 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:59.056 13:48:01 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:59.056 13:48:01 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:59.056 13:48:01 -- nvmf/common.sh@656 -- # echo 1 00:18:59.056 13:48:01 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:59.056 13:48:01 -- nvmf/common.sh@658 -- # echo 1 00:18:59.056 13:48:01 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:59.056 13:48:01 -- nvmf/common.sh@661 -- # echo tcp 00:18:59.056 13:48:01 -- nvmf/common.sh@662 -- # 
echo 4420 00:18:59.056 13:48:01 -- nvmf/common.sh@663 -- # echo ipv4 00:18:59.056 13:48:01 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:59.056 13:48:01 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:18:59.056 00:18:59.056 Discovery Log Number of Records 2, Generation counter 2 00:18:59.056 =====Discovery Log Entry 0====== 00:18:59.056 trtype: tcp 00:18:59.056 adrfam: ipv4 00:18:59.056 subtype: current discovery subsystem 00:18:59.056 treq: not specified, sq flow control disable supported 00:18:59.056 portid: 1 00:18:59.056 trsvcid: 4420 00:18:59.056 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:59.056 traddr: 10.0.0.1 00:18:59.056 eflags: none 00:18:59.056 sectype: none 00:18:59.056 =====Discovery Log Entry 1====== 00:18:59.056 trtype: tcp 00:18:59.056 adrfam: ipv4 00:18:59.056 subtype: nvme subsystem 00:18:59.056 treq: not specified, sq flow control disable supported 00:18:59.056 portid: 1 00:18:59.056 trsvcid: 4420 00:18:59.056 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:59.056 traddr: 10.0.0.1 00:18:59.056 eflags: none 00:18:59.056 sectype: none 00:18:59.056 13:48:01 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:59.056 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:59.056 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.056 ===================================================== 00:18:59.056 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:59.056 ===================================================== 00:18:59.056 Controller Capabilities/Features 00:18:59.056 ================================ 00:18:59.056 Vendor ID: 0000 
00:18:59.056 Subsystem Vendor ID: 0000 00:18:59.056 Serial Number: f6d5141d5c761e44bbcf 00:18:59.056 Model Number: Linux 00:18:59.056 Firmware Version: 6.7.0-68 00:18:59.056 Recommended Arb Burst: 0 00:18:59.056 IEEE OUI Identifier: 00 00 00 00:18:59.056 Multi-path I/O 00:18:59.056 May have multiple subsystem ports: No 00:18:59.056 May have multiple controllers: No 00:18:59.056 Associated with SR-IOV VF: No 00:18:59.056 Max Data Transfer Size: Unlimited 00:18:59.056 Max Number of Namespaces: 0 00:18:59.056 Max Number of I/O Queues: 1024 00:18:59.056 NVMe Specification Version (VS): 1.3 00:18:59.056 NVMe Specification Version (Identify): 1.3 00:18:59.057 Maximum Queue Entries: 1024 00:18:59.057 Contiguous Queues Required: No 00:18:59.057 Arbitration Mechanisms Supported 00:18:59.057 Weighted Round Robin: Not Supported 00:18:59.057 Vendor Specific: Not Supported 00:18:59.057 Reset Timeout: 7500 ms 00:18:59.057 Doorbell Stride: 4 bytes 00:18:59.057 NVM Subsystem Reset: Not Supported 00:18:59.057 Command Sets Supported 00:18:59.057 NVM Command Set: Supported 00:18:59.057 Boot Partition: Not Supported 00:18:59.057 Memory Page Size Minimum: 4096 bytes 00:18:59.057 Memory Page Size Maximum: 4096 bytes 00:18:59.057 Persistent Memory Region: Not Supported 00:18:59.057 Optional Asynchronous Events Supported 00:18:59.057 Namespace Attribute Notices: Not Supported 00:18:59.057 Firmware Activation Notices: Not Supported 00:18:59.057 ANA Change Notices: Not Supported 00:18:59.057 PLE Aggregate Log Change Notices: Not Supported 00:18:59.057 LBA Status Info Alert Notices: Not Supported 00:18:59.057 EGE Aggregate Log Change Notices: Not Supported 00:18:59.057 Normal NVM Subsystem Shutdown event: Not Supported 00:18:59.057 Zone Descriptor Change Notices: Not Supported 00:18:59.057 Discovery Log Change Notices: Supported 00:18:59.057 Controller Attributes 00:18:59.057 128-bit Host Identifier: Not Supported 00:18:59.057 Non-Operational Permissive Mode: Not Supported 00:18:59.057 NVM 
Sets: Not Supported 00:18:59.057 Read Recovery Levels: Not Supported 00:18:59.057 Endurance Groups: Not Supported 00:18:59.057 Predictable Latency Mode: Not Supported 00:18:59.057 Traffic Based Keep ALive: Not Supported 00:18:59.057 Namespace Granularity: Not Supported 00:18:59.057 SQ Associations: Not Supported 00:18:59.057 UUID List: Not Supported 00:18:59.057 Multi-Domain Subsystem: Not Supported 00:18:59.057 Fixed Capacity Management: Not Supported 00:18:59.057 Variable Capacity Management: Not Supported 00:18:59.057 Delete Endurance Group: Not Supported 00:18:59.057 Delete NVM Set: Not Supported 00:18:59.057 Extended LBA Formats Supported: Not Supported 00:18:59.057 Flexible Data Placement Supported: Not Supported 00:18:59.057 00:18:59.057 Controller Memory Buffer Support 00:18:59.057 ================================ 00:18:59.057 Supported: No 00:18:59.057 00:18:59.057 Persistent Memory Region Support 00:18:59.057 ================================ 00:18:59.057 Supported: No 00:18:59.057 00:18:59.057 Admin Command Set Attributes 00:18:59.057 ============================ 00:18:59.057 Security Send/Receive: Not Supported 00:18:59.057 Format NVM: Not Supported 00:18:59.057 Firmware Activate/Download: Not Supported 00:18:59.057 Namespace Management: Not Supported 00:18:59.057 Device Self-Test: Not Supported 00:18:59.057 Directives: Not Supported 00:18:59.057 NVMe-MI: Not Supported 00:18:59.057 Virtualization Management: Not Supported 00:18:59.057 Doorbell Buffer Config: Not Supported 00:18:59.057 Get LBA Status Capability: Not Supported 00:18:59.057 Command & Feature Lockdown Capability: Not Supported 00:18:59.057 Abort Command Limit: 1 00:18:59.057 Async Event Request Limit: 1 00:18:59.057 Number of Firmware Slots: N/A 00:18:59.057 Firmware Slot 1 Read-Only: N/A 00:18:59.057 Firmware Activation Without Reset: N/A 00:18:59.057 Multiple Update Detection Support: N/A 00:18:59.057 Firmware Update Granularity: No Information Provided 00:18:59.057 Per-Namespace SMART 
Log: No 00:18:59.057 Asymmetric Namespace Access Log Page: Not Supported 00:18:59.057 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:59.057 Command Effects Log Page: Not Supported 00:18:59.057 Get Log Page Extended Data: Supported 00:18:59.057 Telemetry Log Pages: Not Supported 00:18:59.057 Persistent Event Log Pages: Not Supported 00:18:59.057 Supported Log Pages Log Page: May Support 00:18:59.057 Commands Supported & Effects Log Page: Not Supported 00:18:59.057 Feature Identifiers & Effects Log Page:May Support 00:18:59.057 NVMe-MI Commands & Effects Log Page: May Support 00:18:59.057 Data Area 4 for Telemetry Log: Not Supported 00:18:59.057 Error Log Page Entries Supported: 1 00:18:59.057 Keep Alive: Not Supported 00:18:59.057 00:18:59.057 NVM Command Set Attributes 00:18:59.057 ========================== 00:18:59.057 Submission Queue Entry Size 00:18:59.057 Max: 1 00:18:59.057 Min: 1 00:18:59.057 Completion Queue Entry Size 00:18:59.057 Max: 1 00:18:59.057 Min: 1 00:18:59.057 Number of Namespaces: 0 00:18:59.057 Compare Command: Not Supported 00:18:59.057 Write Uncorrectable Command: Not Supported 00:18:59.057 Dataset Management Command: Not Supported 00:18:59.057 Write Zeroes Command: Not Supported 00:18:59.057 Set Features Save Field: Not Supported 00:18:59.057 Reservations: Not Supported 00:18:59.057 Timestamp: Not Supported 00:18:59.057 Copy: Not Supported 00:18:59.057 Volatile Write Cache: Not Present 00:18:59.057 Atomic Write Unit (Normal): 1 00:18:59.057 Atomic Write Unit (PFail): 1 00:18:59.057 Atomic Compare & Write Unit: 1 00:18:59.057 Fused Compare & Write: Not Supported 00:18:59.057 Scatter-Gather List 00:18:59.057 SGL Command Set: Supported 00:18:59.057 SGL Keyed: Not Supported 00:18:59.057 SGL Bit Bucket Descriptor: Not Supported 00:18:59.057 SGL Metadata Pointer: Not Supported 00:18:59.057 Oversized SGL: Not Supported 00:18:59.057 SGL Metadata Address: Not Supported 00:18:59.057 SGL Offset: Supported 00:18:59.057 Transport SGL Data 
Block: Not Supported 00:18:59.057 Replay Protected Memory Block: Not Supported 00:18:59.057 00:18:59.057 Firmware Slot Information 00:18:59.057 ========================= 00:18:59.057 Active slot: 0 00:18:59.057 00:18:59.057 00:18:59.057 Error Log 00:18:59.057 ========= 00:18:59.057 00:18:59.057 Active Namespaces 00:18:59.057 ================= 00:18:59.057 Discovery Log Page 00:18:59.057 ================== 00:18:59.057 Generation Counter: 2 00:18:59.057 Number of Records: 2 00:18:59.057 Record Format: 0 00:18:59.057 00:18:59.057 Discovery Log Entry 0 00:18:59.057 ---------------------- 00:18:59.057 Transport Type: 3 (TCP) 00:18:59.057 Address Family: 1 (IPv4) 00:18:59.057 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:59.057 Entry Flags: 00:18:59.057 Duplicate Returned Information: 0 00:18:59.057 Explicit Persistent Connection Support for Discovery: 0 00:18:59.057 Transport Requirements: 00:18:59.057 Secure Channel: Not Specified 00:18:59.057 Port ID: 1 (0x0001) 00:18:59.057 Controller ID: 65535 (0xffff) 00:18:59.057 Admin Max SQ Size: 32 00:18:59.057 Transport Service Identifier: 4420 00:18:59.057 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:59.057 Transport Address: 10.0.0.1 00:18:59.057 Discovery Log Entry 1 00:18:59.057 ---------------------- 00:18:59.057 Transport Type: 3 (TCP) 00:18:59.057 Address Family: 1 (IPv4) 00:18:59.057 Subsystem Type: 2 (NVM Subsystem) 00:18:59.057 Entry Flags: 00:18:59.057 Duplicate Returned Information: 0 00:18:59.057 Explicit Persistent Connection Support for Discovery: 0 00:18:59.057 Transport Requirements: 00:18:59.057 Secure Channel: Not Specified 00:18:59.057 Port ID: 1 (0x0001) 00:18:59.057 Controller ID: 65535 (0xffff) 00:18:59.057 Admin Max SQ Size: 32 00:18:59.057 Transport Service Identifier: 4420 00:18:59.057 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:59.057 Transport Address: 10.0.0.1 00:18:59.057 13:48:01 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:59.057 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.057 get_feature(0x01) failed 00:18:59.057 get_feature(0x02) failed 00:18:59.057 get_feature(0x04) failed 00:18:59.057 ===================================================== 00:18:59.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:59.057 ===================================================== 00:18:59.057 Controller Capabilities/Features 00:18:59.057 ================================ 00:18:59.057 Vendor ID: 0000 00:18:59.057 Subsystem Vendor ID: 0000 00:18:59.057 Serial Number: c80aed41019093230c5c 00:18:59.057 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:59.057 Firmware Version: 6.7.0-68 00:18:59.057 Recommended Arb Burst: 6 00:18:59.057 IEEE OUI Identifier: 00 00 00 00:18:59.057 Multi-path I/O 00:18:59.057 May have multiple subsystem ports: Yes 00:18:59.057 May have multiple controllers: Yes 00:18:59.057 Associated with SR-IOV VF: No 00:18:59.057 Max Data Transfer Size: Unlimited 00:18:59.057 Max Number of Namespaces: 1024 00:18:59.057 Max Number of I/O Queues: 128 00:18:59.057 NVMe Specification Version (VS): 1.3 00:18:59.057 NVMe Specification Version (Identify): 1.3 00:18:59.057 Maximum Queue Entries: 1024 00:18:59.057 Contiguous Queues Required: No 00:18:59.057 Arbitration Mechanisms Supported 00:18:59.057 Weighted Round Robin: Not Supported 00:18:59.057 Vendor Specific: Not Supported 00:18:59.057 Reset Timeout: 7500 ms 00:18:59.058 Doorbell Stride: 4 bytes 00:18:59.058 NVM Subsystem Reset: Not Supported 00:18:59.058 Command Sets Supported 00:18:59.058 NVM Command Set: Supported 00:18:59.058 Boot Partition: Not Supported 00:18:59.058 Memory Page Size Minimum: 4096 bytes 00:18:59.058 Memory Page Size Maximum: 4096 bytes 00:18:59.058 Persistent Memory Region: Not Supported 
00:18:59.058 Optional Asynchronous Events Supported 00:18:59.058 Namespace Attribute Notices: Supported 00:18:59.058 Firmware Activation Notices: Not Supported 00:18:59.058 ANA Change Notices: Supported 00:18:59.058 PLE Aggregate Log Change Notices: Not Supported 00:18:59.058 LBA Status Info Alert Notices: Not Supported 00:18:59.058 EGE Aggregate Log Change Notices: Not Supported 00:18:59.058 Normal NVM Subsystem Shutdown event: Not Supported 00:18:59.058 Zone Descriptor Change Notices: Not Supported 00:18:59.058 Discovery Log Change Notices: Not Supported 00:18:59.058 Controller Attributes 00:18:59.058 128-bit Host Identifier: Supported 00:18:59.058 Non-Operational Permissive Mode: Not Supported 00:18:59.058 NVM Sets: Not Supported 00:18:59.058 Read Recovery Levels: Not Supported 00:18:59.058 Endurance Groups: Not Supported 00:18:59.058 Predictable Latency Mode: Not Supported 00:18:59.058 Traffic Based Keep ALive: Supported 00:18:59.058 Namespace Granularity: Not Supported 00:18:59.058 SQ Associations: Not Supported 00:18:59.058 UUID List: Not Supported 00:18:59.058 Multi-Domain Subsystem: Not Supported 00:18:59.058 Fixed Capacity Management: Not Supported 00:18:59.058 Variable Capacity Management: Not Supported 00:18:59.058 Delete Endurance Group: Not Supported 00:18:59.058 Delete NVM Set: Not Supported 00:18:59.058 Extended LBA Formats Supported: Not Supported 00:18:59.058 Flexible Data Placement Supported: Not Supported 00:18:59.058 00:18:59.058 Controller Memory Buffer Support 00:18:59.058 ================================ 00:18:59.058 Supported: No 00:18:59.058 00:18:59.058 Persistent Memory Region Support 00:18:59.058 ================================ 00:18:59.058 Supported: No 00:18:59.058 00:18:59.058 Admin Command Set Attributes 00:18:59.058 ============================ 00:18:59.058 Security Send/Receive: Not Supported 00:18:59.058 Format NVM: Not Supported 00:18:59.058 Firmware Activate/Download: Not Supported 00:18:59.058 Namespace Management: Not 
Supported 00:18:59.058 Device Self-Test: Not Supported 00:18:59.058 Directives: Not Supported 00:18:59.058 NVMe-MI: Not Supported 00:18:59.058 Virtualization Management: Not Supported 00:18:59.058 Doorbell Buffer Config: Not Supported 00:18:59.058 Get LBA Status Capability: Not Supported 00:18:59.058 Command & Feature Lockdown Capability: Not Supported 00:18:59.058 Abort Command Limit: 4 00:18:59.058 Async Event Request Limit: 4 00:18:59.058 Number of Firmware Slots: N/A 00:18:59.058 Firmware Slot 1 Read-Only: N/A 00:18:59.058 Firmware Activation Without Reset: N/A 00:18:59.058 Multiple Update Detection Support: N/A 00:18:59.058 Firmware Update Granularity: No Information Provided 00:18:59.058 Per-Namespace SMART Log: Yes 00:18:59.058 Asymmetric Namespace Access Log Page: Supported 00:18:59.058 ANA Transition Time : 10 sec 00:18:59.058 00:18:59.058 Asymmetric Namespace Access Capabilities 00:18:59.058 ANA Optimized State : Supported 00:18:59.058 ANA Non-Optimized State : Supported 00:18:59.058 ANA Inaccessible State : Supported 00:18:59.058 ANA Persistent Loss State : Supported 00:18:59.058 ANA Change State : Supported 00:18:59.058 ANAGRPID is not changed : No 00:18:59.058 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:59.058 00:18:59.058 ANA Group Identifier Maximum : 128 00:18:59.058 Number of ANA Group Identifiers : 128 00:18:59.058 Max Number of Allowed Namespaces : 1024 00:18:59.058 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:59.058 Command Effects Log Page: Supported 00:18:59.058 Get Log Page Extended Data: Supported 00:18:59.058 Telemetry Log Pages: Not Supported 00:18:59.058 Persistent Event Log Pages: Not Supported 00:18:59.058 Supported Log Pages Log Page: May Support 00:18:59.058 Commands Supported & Effects Log Page: Not Supported 00:18:59.058 Feature Identifiers & Effects Log Page:May Support 00:18:59.058 NVMe-MI Commands & Effects Log Page: May Support 00:18:59.058 Data Area 4 for Telemetry Log: Not Supported 00:18:59.058 Error Log Page 
Entries Supported: 128 00:18:59.058 Keep Alive: Supported 00:18:59.058 Keep Alive Granularity: 1000 ms 00:18:59.058 00:18:59.058 NVM Command Set Attributes 00:18:59.058 ========================== 00:18:59.058 Submission Queue Entry Size 00:18:59.058 Max: 64 00:18:59.058 Min: 64 00:18:59.058 Completion Queue Entry Size 00:18:59.058 Max: 16 00:18:59.058 Min: 16 00:18:59.058 Number of Namespaces: 1024 00:18:59.058 Compare Command: Not Supported 00:18:59.058 Write Uncorrectable Command: Not Supported 00:18:59.058 Dataset Management Command: Supported 00:18:59.058 Write Zeroes Command: Supported 00:18:59.058 Set Features Save Field: Not Supported 00:18:59.058 Reservations: Not Supported 00:18:59.058 Timestamp: Not Supported 00:18:59.058 Copy: Not Supported 00:18:59.058 Volatile Write Cache: Present 00:18:59.058 Atomic Write Unit (Normal): 1 00:18:59.058 Atomic Write Unit (PFail): 1 00:18:59.058 Atomic Compare & Write Unit: 1 00:18:59.058 Fused Compare & Write: Not Supported 00:18:59.058 Scatter-Gather List 00:18:59.058 SGL Command Set: Supported 00:18:59.058 SGL Keyed: Not Supported 00:18:59.058 SGL Bit Bucket Descriptor: Not Supported 00:18:59.058 SGL Metadata Pointer: Not Supported 00:18:59.058 Oversized SGL: Not Supported 00:18:59.058 SGL Metadata Address: Not Supported 00:18:59.058 SGL Offset: Supported 00:18:59.058 Transport SGL Data Block: Not Supported 00:18:59.058 Replay Protected Memory Block: Not Supported 00:18:59.058 00:18:59.058 Firmware Slot Information 00:18:59.058 ========================= 00:18:59.058 Active slot: 0 00:18:59.058 00:18:59.058 Asymmetric Namespace Access 00:18:59.058 =========================== 00:18:59.058 Change Count : 0 00:18:59.058 Number of ANA Group Descriptors : 1 00:18:59.058 ANA Group Descriptor : 0 00:18:59.058 ANA Group ID : 1 00:18:59.058 Number of NSID Values : 1 00:18:59.058 Change Count : 0 00:18:59.058 ANA State : 1 00:18:59.058 Namespace Identifier : 1 00:18:59.058 00:18:59.058 Commands Supported and Effects 00:18:59.058 
============================== 00:18:59.058 Admin Commands 00:18:59.058 -------------- 00:18:59.058 Get Log Page (02h): Supported 00:18:59.058 Identify (06h): Supported 00:18:59.058 Abort (08h): Supported 00:18:59.058 Set Features (09h): Supported 00:18:59.058 Get Features (0Ah): Supported 00:18:59.058 Asynchronous Event Request (0Ch): Supported 00:18:59.058 Keep Alive (18h): Supported 00:18:59.058 I/O Commands 00:18:59.058 ------------ 00:18:59.058 Flush (00h): Supported 00:18:59.058 Write (01h): Supported LBA-Change 00:18:59.058 Read (02h): Supported 00:18:59.058 Write Zeroes (08h): Supported LBA-Change 00:18:59.058 Dataset Management (09h): Supported 00:18:59.058 00:18:59.058 Error Log 00:18:59.058 ========= 00:18:59.058 Entry: 0 00:18:59.058 Error Count: 0x3 00:18:59.058 Submission Queue Id: 0x0 00:18:59.058 Command Id: 0x5 00:18:59.058 Phase Bit: 0 00:18:59.058 Status Code: 0x2 00:18:59.058 Status Code Type: 0x0 00:18:59.058 Do Not Retry: 1 00:18:59.058 Error Location: 0x28 00:18:59.058 LBA: 0x0 00:18:59.058 Namespace: 0x0 00:18:59.058 Vendor Log Page: 0x0 00:18:59.058 ----------- 00:18:59.058 Entry: 1 00:18:59.058 Error Count: 0x2 00:18:59.058 Submission Queue Id: 0x0 00:18:59.058 Command Id: 0x5 00:18:59.058 Phase Bit: 0 00:18:59.058 Status Code: 0x2 00:18:59.058 Status Code Type: 0x0 00:18:59.058 Do Not Retry: 1 00:18:59.058 Error Location: 0x28 00:18:59.058 LBA: 0x0 00:18:59.058 Namespace: 0x0 00:18:59.058 Vendor Log Page: 0x0 00:18:59.058 ----------- 00:18:59.058 Entry: 2 00:18:59.058 Error Count: 0x1 00:18:59.058 Submission Queue Id: 0x0 00:18:59.058 Command Id: 0x4 00:18:59.058 Phase Bit: 0 00:18:59.058 Status Code: 0x2 00:18:59.058 Status Code Type: 0x0 00:18:59.058 Do Not Retry: 1 00:18:59.058 Error Location: 0x28 00:18:59.058 LBA: 0x0 00:18:59.058 Namespace: 0x0 00:18:59.058 Vendor Log Page: 0x0 00:18:59.058 00:18:59.058 Number of Queues 00:18:59.058 ================ 00:18:59.058 Number of I/O Submission Queues: 128 00:18:59.058 Number of I/O 
Completion Queues: 128 00:18:59.058 00:18:59.058 ZNS Specific Controller Data 00:18:59.058 ============================ 00:18:59.058 Zone Append Size Limit: 0 00:18:59.058 00:18:59.058 00:18:59.058 Active Namespaces 00:18:59.058 ================= 00:18:59.059 get_feature(0x05) failed 00:18:59.059 Namespace ID:1 00:18:59.059 Command Set Identifier: NVM (00h) 00:18:59.059 Deallocate: Supported 00:18:59.059 Deallocated/Unwritten Error: Not Supported 00:18:59.059 Deallocated Read Value: Unknown 00:18:59.059 Deallocate in Write Zeroes: Not Supported 00:18:59.059 Deallocated Guard Field: 0xFFFF 00:18:59.059 Flush: Supported 00:18:59.059 Reservation: Not Supported 00:18:59.059 Namespace Sharing Capabilities: Multiple Controllers 00:18:59.059 Size (in LBAs): 1953525168 (931GiB) 00:18:59.059 Capacity (in LBAs): 1953525168 (931GiB) 00:18:59.059 Utilization (in LBAs): 1953525168 (931GiB) 00:18:59.059 UUID: fda2f42a-b923-43cb-bfa4-a358c090466a 00:18:59.059 Thin Provisioning: Not Supported 00:18:59.059 Per-NS Atomic Units: Yes 00:18:59.059 Atomic Boundary Size (Normal): 0 00:18:59.059 Atomic Boundary Size (PFail): 0 00:18:59.059 Atomic Boundary Offset: 0 00:18:59.059 NGUID/EUI64 Never Reused: No 00:18:59.059 ANA group ID: 1 00:18:59.059 Namespace Write Protected: No 00:18:59.059 Number of LBA Formats: 1 00:18:59.059 Current LBA Format: LBA Format #00 00:18:59.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:59.059 00:18:59.059 13:48:01 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:59.059 13:48:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:59.059 13:48:01 -- nvmf/common.sh@117 -- # sync 00:18:59.059 13:48:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.059 13:48:01 -- nvmf/common.sh@120 -- # set +e 00:18:59.059 13:48:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.059 13:48:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.059 rmmod nvme_tcp 00:18:59.318 rmmod nvme_fabrics 00:18:59.318 13:48:01 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:18:59.318 13:48:01 -- nvmf/common.sh@124 -- # set -e 00:18:59.318 13:48:01 -- nvmf/common.sh@125 -- # return 0 00:18:59.318 13:48:01 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:59.318 13:48:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:59.318 13:48:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:59.318 13:48:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:59.318 13:48:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.318 13:48:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.318 13:48:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.318 13:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.318 13:48:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.218 13:48:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.218 13:48:03 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:01.218 13:48:03 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:01.218 13:48:03 -- nvmf/common.sh@675 -- # echo 0 00:19:01.218 13:48:03 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:01.218 13:48:03 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:01.218 13:48:03 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:01.218 13:48:03 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:01.218 13:48:03 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:01.218 13:48:03 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:01.218 13:48:03 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:02.591 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.6 (8086 
0e26): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:02.591 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:02.591 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:03.527 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:19:03.527 00:19:03.527 real 0m9.105s 00:19:03.527 user 0m1.924s 00:19:03.527 sys 0m3.333s 00:19:03.527 13:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.527 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:03.527 ************************************ 00:19:03.527 END TEST nvmf_identify_kernel_target 00:19:03.527 ************************************ 00:19:03.527 13:48:06 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:03.527 13:48:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:03.527 13:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.527 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:03.786 ************************************ 00:19:03.786 START TEST nvmf_auth 00:19:03.786 ************************************ 00:19:03.786 13:48:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:03.786 * Looking for test storage... 
00:19:03.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:03.786 13:48:06 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.786 13:48:06 -- nvmf/common.sh@7 -- # uname -s 00:19:03.786 13:48:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.786 13:48:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.786 13:48:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.786 13:48:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.786 13:48:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.786 13:48:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.786 13:48:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.786 13:48:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.786 13:48:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.786 13:48:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.786 13:48:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.786 13:48:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.786 13:48:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.786 13:48:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.786 13:48:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.786 13:48:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.786 13:48:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.786 13:48:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.786 13:48:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.786 13:48:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.786 13:48:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.786 13:48:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.786 13:48:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.786 13:48:06 -- paths/export.sh@5 -- # export PATH 00:19:03.786 13:48:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.786 13:48:06 -- nvmf/common.sh@47 -- # : 0 00:19:03.786 13:48:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.786 13:48:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.786 13:48:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.786 13:48:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.786 13:48:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.786 13:48:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.786 13:48:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.786 13:48:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.786 13:48:06 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:03.786 13:48:06 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:03.786 13:48:06 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:03.786 13:48:06 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:03.786 13:48:06 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:03.786 13:48:06 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:03.786 13:48:06 -- host/auth.sh@21 -- # keys=() 00:19:03.786 13:48:06 -- host/auth.sh@77 -- # nvmftestinit 00:19:03.786 13:48:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:03.786 13:48:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:03.786 13:48:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:03.786 13:48:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:03.786 13:48:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:03.786 13:48:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.786 13:48:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.786 13:48:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.786 13:48:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:03.786 13:48:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:03.786 13:48:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.787 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:19:05.690 13:48:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:05.690 13:48:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.690 13:48:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.690 13:48:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.690 13:48:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.690 13:48:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.690 13:48:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.690 13:48:08 -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.690 13:48:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.690 13:48:08 -- nvmf/common.sh@296 -- # e810=() 00:19:05.690 13:48:08 -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.690 13:48:08 -- nvmf/common.sh@297 -- # x722=() 00:19:05.690 13:48:08 -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.690 13:48:08 -- nvmf/common.sh@298 -- # mlx=() 00:19:05.690 13:48:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.690 13:48:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:19:05.690 13:48:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.690 13:48:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.690 13:48:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.690 13:48:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.690 13:48:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.690 13:48:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.690 13:48:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.690 13:48:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.690 13:48:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:05.690 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:05.690 13:48:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.690 13:48:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.691 13:48:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:05.691 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:05.691 13:48:08 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.691 13:48:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.691 13:48:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.691 13:48:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.691 13:48:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.691 13:48:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:05.691 Found net devices under 0000:84:00.0: cvl_0_0 00:19:05.691 13:48:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.691 13:48:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.691 13:48:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.691 13:48:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:05.691 13:48:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.691 13:48:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:05.691 Found net devices under 0000:84:00.1: cvl_0_1 00:19:05.691 13:48:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.691 13:48:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:05.691 13:48:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:05.691 13:48:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:05.691 13:48:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
00:19:05.691 13:48:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.691 13:48:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.691 13:48:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.691 13:48:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.691 13:48:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.691 13:48:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.691 13:48:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.691 13:48:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.691 13:48:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.691 13:48:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.691 13:48:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.691 13:48:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.691 13:48:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.691 13:48:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.691 13:48:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.691 13:48:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.691 13:48:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.949 13:48:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.949 13:48:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.949 13:48:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:05.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:19:05.949 00:19:05.949 --- 10.0.0.2 ping statistics --- 00:19:05.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.949 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:05.949 13:48:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:19:05.949 00:19:05.949 --- 10.0.0.1 ping statistics --- 00:19:05.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.949 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:05.949 13:48:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.949 13:48:08 -- nvmf/common.sh@411 -- # return 0 00:19:05.949 13:48:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:05.949 13:48:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.949 13:48:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:05.949 13:48:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:05.949 13:48:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.949 13:48:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:05.949 13:48:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:05.949 13:48:08 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:05.949 13:48:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:05.949 13:48:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:05.949 13:48:08 -- common/autotest_common.sh@10 -- # set +x 00:19:05.949 13:48:08 -- nvmf/common.sh@470 -- # nvmfpid=2663638 00:19:05.949 13:48:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:05.949 13:48:08 -- nvmf/common.sh@471 -- # waitforlisten 2663638 00:19:05.949 13:48:08 -- 
common/autotest_common.sh@817 -- # '[' -z 2663638 ']' 00:19:05.949 13:48:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.949 13:48:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.949 13:48:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.949 13:48:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.949 13:48:08 -- common/autotest_common.sh@10 -- # set +x 00:19:06.884 13:48:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.884 13:48:09 -- common/autotest_common.sh@850 -- # return 0 00:19:06.884 13:48:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:06.884 13:48:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:06.884 13:48:09 -- common/autotest_common.sh@10 -- # set +x 00:19:06.884 13:48:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.884 13:48:09 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:06.884 13:48:09 -- host/auth.sh@81 -- # gen_key null 32 00:19:06.884 13:48:09 -- host/auth.sh@53 -- # local digest len file key 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # local -A digests 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # digest=null 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # len=32 00:19:06.884 13:48:09 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.884 13:48:09 -- host/auth.sh@57 -- # key=de15ae4b11163dccca0946c194e2fdd2 00:19:06.884 13:48:09 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:06.884 13:48:09 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.tjU 00:19:06.884 13:48:09 -- host/auth.sh@59 -- # format_dhchap_key de15ae4b11163dccca0946c194e2fdd2 
0 00:19:06.884 13:48:09 -- nvmf/common.sh@708 -- # format_key DHHC-1 de15ae4b11163dccca0946c194e2fdd2 0 00:19:06.884 13:48:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # key=de15ae4b11163dccca0946c194e2fdd2 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # digest=0 00:19:06.884 13:48:09 -- nvmf/common.sh@694 -- # python - 00:19:06.884 13:48:09 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.tjU 00:19:06.884 13:48:09 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.tjU 00:19:06.884 13:48:09 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.tjU 00:19:06.884 13:48:09 -- host/auth.sh@82 -- # gen_key null 48 00:19:06.884 13:48:09 -- host/auth.sh@53 -- # local digest len file key 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # local -A digests 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # digest=null 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # len=48 00:19:06.884 13:48:09 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.884 13:48:09 -- host/auth.sh@57 -- # key=4deb0d49887a01e16822c30675ee530a3de2334a6d0e2b25 00:19:06.884 13:48:09 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:06.884 13:48:09 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.kP2 00:19:06.884 13:48:09 -- host/auth.sh@59 -- # format_dhchap_key 4deb0d49887a01e16822c30675ee530a3de2334a6d0e2b25 0 00:19:06.884 13:48:09 -- nvmf/common.sh@708 -- # format_key DHHC-1 4deb0d49887a01e16822c30675ee530a3de2334a6d0e2b25 0 00:19:06.884 13:48:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # key=4deb0d49887a01e16822c30675ee530a3de2334a6d0e2b25 00:19:06.884 13:48:09 -- nvmf/common.sh@693 -- # digest=0 00:19:06.884 13:48:09 -- nvmf/common.sh@694 -- # 
python - 00:19:06.884 13:48:09 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.kP2 00:19:06.884 13:48:09 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.kP2 00:19:06.884 13:48:09 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.kP2 00:19:06.884 13:48:09 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:06.884 13:48:09 -- host/auth.sh@53 -- # local digest len file key 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.884 13:48:09 -- host/auth.sh@54 -- # local -A digests 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # digest=sha256 00:19:06.884 13:48:09 -- host/auth.sh@56 -- # len=32 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # key=cadeffb860b26508b7fef6727bbc3eb7 00:19:07.143 13:48:09 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:07.143 13:48:09 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.o52 00:19:07.143 13:48:09 -- host/auth.sh@59 -- # format_dhchap_key cadeffb860b26508b7fef6727bbc3eb7 1 00:19:07.143 13:48:09 -- nvmf/common.sh@708 -- # format_key DHHC-1 cadeffb860b26508b7fef6727bbc3eb7 1 00:19:07.143 13:48:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # key=cadeffb860b26508b7fef6727bbc3eb7 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # digest=1 00:19:07.143 13:48:09 -- nvmf/common.sh@694 -- # python - 00:19:07.143 13:48:09 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.o52 00:19:07.143 13:48:09 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.o52 00:19:07.143 13:48:09 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.o52 00:19:07.143 13:48:09 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:07.143 13:48:09 -- host/auth.sh@53 -- # local digest len file key 00:19:07.143 13:48:09 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:19:07.143 13:48:09 -- host/auth.sh@54 -- # local -A digests 00:19:07.143 13:48:09 -- host/auth.sh@56 -- # digest=sha384 00:19:07.143 13:48:09 -- host/auth.sh@56 -- # len=48 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # key=c8ea58a1732f16f7dc20d94b943043d0e8c13729b5ab2b9a 00:19:07.143 13:48:09 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:07.143 13:48:09 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.EmS 00:19:07.143 13:48:09 -- host/auth.sh@59 -- # format_dhchap_key c8ea58a1732f16f7dc20d94b943043d0e8c13729b5ab2b9a 2 00:19:07.143 13:48:09 -- nvmf/common.sh@708 -- # format_key DHHC-1 c8ea58a1732f16f7dc20d94b943043d0e8c13729b5ab2b9a 2 00:19:07.143 13:48:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # key=c8ea58a1732f16f7dc20d94b943043d0e8c13729b5ab2b9a 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # digest=2 00:19:07.143 13:48:09 -- nvmf/common.sh@694 -- # python - 00:19:07.143 13:48:09 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.EmS 00:19:07.143 13:48:09 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.EmS 00:19:07.143 13:48:09 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.EmS 00:19:07.143 13:48:09 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:07.143 13:48:09 -- host/auth.sh@53 -- # local digest len file key 00:19:07.143 13:48:09 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:07.143 13:48:09 -- host/auth.sh@54 -- # local -A digests 00:19:07.143 13:48:09 -- host/auth.sh@56 -- # digest=sha512 00:19:07.143 13:48:09 -- host/auth.sh@56 -- # len=64 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:07.143 13:48:09 -- host/auth.sh@57 -- # key=d326a448948dd020c55ae1ada2914c395c3cbd63105b349523e011cdbaa17e5c 00:19:07.143 13:48:09 -- 
host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:07.143 13:48:09 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.Fkd 00:19:07.143 13:48:09 -- host/auth.sh@59 -- # format_dhchap_key d326a448948dd020c55ae1ada2914c395c3cbd63105b349523e011cdbaa17e5c 3 00:19:07.143 13:48:09 -- nvmf/common.sh@708 -- # format_key DHHC-1 d326a448948dd020c55ae1ada2914c395c3cbd63105b349523e011cdbaa17e5c 3 00:19:07.143 13:48:09 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # key=d326a448948dd020c55ae1ada2914c395c3cbd63105b349523e011cdbaa17e5c 00:19:07.143 13:48:09 -- nvmf/common.sh@693 -- # digest=3 00:19:07.143 13:48:09 -- nvmf/common.sh@694 -- # python - 00:19:07.143 13:48:09 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.Fkd 00:19:07.143 13:48:09 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.Fkd 00:19:07.143 13:48:09 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.Fkd 00:19:07.143 13:48:09 -- host/auth.sh@87 -- # waitforlisten 2663638 00:19:07.143 13:48:09 -- common/autotest_common.sh@817 -- # '[' -z 2663638 ']' 00:19:07.143 13:48:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.143 13:48:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.143 13:48:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
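The `gen_key`/`format_dhchap_key` calls traced above read random bytes with `xxd`, then wrap the ASCII hex string in the `DHHC-1:<hash-id>:<base64 secret>:` representation via an inline `python -` snippet. Below is a hypothetical stand-alone sketch of those two helpers (not the real ones from `host/auth.sh`/`nvmf/common.sh`), assuming the nvme-cli secret layout: base64 of the key bytes followed by their little-endian CRC-32.

```shell
# Hypothetical re-implementation of the helpers traced above.
# Assumption: secret = base64(key || CRC-32(key), CRC little-endian),
# prefixed with DHHC-1:<hash-id>: as nvme-cli's gen-dhchap-key emits.
format_dhchap_key() {  # format_dhchap_key <ascii-key> <hash-id>
    python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()                # the ASCII hex string *is* the secret
crc = struct.pack("<I", zlib.crc32(key))  # CRC-32 of the secret, little-endian
b64 = base64.b64encode(key + crc).decode()
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), b64))' "$1" "$2"
}

gen_key() {  # gen_key <digest-name> <key-len-in-hex-chars>
    local digest=$1 len=$2 key file
    declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex chars of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"                              # secrets are mode 0600
    echo "$file"
}
```

Feeding the sha256 key from the trace (`cadeffb860b26508b7fef6727bbc3eb7`, hash id 1) through this sketch should reproduce the `DHHC-1:01:Y2FkZWZmYjg2…` secret that appears later in the log.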
00:19:07.143 13:48:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.143 13:48:09 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.401 13:48:10 -- common/autotest_common.sh@850 -- # return 0 00:19:07.401 13:48:10 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:07.401 13:48:10 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tjU 00:19:07.401 13:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.401 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.401 13:48:10 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:07.401 13:48:10 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kP2 00:19:07.401 13:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.401 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.401 13:48:10 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:07.401 13:48:10 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.o52 00:19:07.401 13:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.401 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.401 13:48:10 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:07.401 13:48:10 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EmS 00:19:07.401 13:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.401 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.401 13:48:10 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:07.401 13:48:10 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.Fkd 00:19:07.401 13:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.401 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 13:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.401 13:48:10 -- host/auth.sh@92 -- # nvmet_auth_init 00:19:07.401 13:48:10 -- host/auth.sh@35 -- # get_main_ns_ip 00:19:07.401 13:48:10 -- nvmf/common.sh@717 -- # local ip 00:19:07.401 13:48:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.401 13:48:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.401 13:48:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.401 13:48:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.401 13:48:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.401 13:48:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.401 13:48:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.401 13:48:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.401 13:48:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.401 13:48:10 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:07.401 13:48:10 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:07.401 13:48:10 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:07.401 13:48:10 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:07.401 13:48:10 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:07.401 13:48:10 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:07.401 13:48:10 -- nvmf/common.sh@628 -- # local block nvme 00:19:07.401 13:48:10 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:07.401 13:48:10 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:07.401 13:48:10 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:07.401 13:48:10 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:08.776 Waiting for block devices as requested 00:19:08.776 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:19:08.776 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:08.776 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:08.776 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:08.776 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:08.776 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:09.033 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:09.033 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:09.033 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:09.033 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:09.290 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:09.290 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:09.290 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:09.290 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:09.548 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:09.548 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:09.548 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:10.114 13:48:12 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:10.114 13:48:12 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:10.114 13:48:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:10.114 13:48:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:10.114 13:48:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:10.114 13:48:12 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:10.114 
13:48:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:10.114 No valid GPT data, bailing 00:19:10.114 13:48:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:10.114 13:48:12 -- scripts/common.sh@391 -- # pt= 00:19:10.114 13:48:12 -- scripts/common.sh@392 -- # return 1 00:19:10.114 13:48:12 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:10.114 13:48:12 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:10.114 13:48:12 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:10.114 13:48:12 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:10.114 13:48:12 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:10.114 13:48:12 -- nvmf/common.sh@656 -- # echo 1 00:19:10.114 13:48:12 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:10.114 13:48:12 -- nvmf/common.sh@658 -- # echo 1 00:19:10.114 13:48:12 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:10.114 13:48:12 -- nvmf/common.sh@661 -- # echo tcp 00:19:10.114 13:48:12 -- nvmf/common.sh@662 -- # echo 4420 00:19:10.114 13:48:12 -- nvmf/common.sh@663 -- # echo ipv4 00:19:10.114 13:48:12 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:10.114 13:48:12 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:19:10.114 00:19:10.114 Discovery Log Number of Records 2, Generation counter 2 00:19:10.114 =====Discovery Log Entry 0====== 00:19:10.114 trtype: tcp 00:19:10.114 adrfam: ipv4 00:19:10.114 subtype: current discovery subsystem 00:19:10.114 treq: not specified, sq flow control 
disable supported 00:19:10.114 portid: 1 00:19:10.114 trsvcid: 4420 00:19:10.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:10.114 traddr: 10.0.0.1 00:19:10.114 eflags: none 00:19:10.114 sectype: none 00:19:10.114 =====Discovery Log Entry 1====== 00:19:10.114 trtype: tcp 00:19:10.114 adrfam: ipv4 00:19:10.114 subtype: nvme subsystem 00:19:10.114 treq: not specified, sq flow control disable supported 00:19:10.114 portid: 1 00:19:10.114 trsvcid: 4420 00:19:10.114 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:10.114 traddr: 10.0.0.1 00:19:10.114 eflags: none 00:19:10.114 sectype: none 00:19:10.114 13:48:12 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:10.114 13:48:12 -- host/auth.sh@37 -- # echo 0 00:19:10.114 13:48:12 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:10.114 13:48:12 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:10.114 13:48:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.114 13:48:12 -- host/auth.sh@44 -- # digest=sha256 00:19:10.114 13:48:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.114 13:48:12 -- host/auth.sh@44 -- # keyid=1 00:19:10.114 13:48:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:10.114 13:48:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.114 13:48:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.114 13:48:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:10.114 13:48:12 -- host/auth.sh@100 -- # IFS=, 00:19:10.114 13:48:12 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:19:10.114 13:48:12 -- host/auth.sh@100 -- # IFS=, 00:19:10.114 13:48:12 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
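The `configure_kernel_target` trace above builds the kernel NVMe-oF target purely through configfs writes: create the subsystem, namespace, and port nodes, fill in their attributes, then publish the subsystem by symlinking it under the port. A sketch of those steps with the configfs root passed in as a parameter, so it can be dry-run against a scratch directory; on a real system the root is `/sys/kernel/config/nvmet`, the `attr_*` files are created by the kernel rather than by this script, and root privileges are required.

```shell
# Sketch of the configfs writes traced above. $1 is the configfs root
# (/sys/kernel/config/nvmet on a real system, where only the node mkdirs
# apply and the attribute files already exist).
setup_kernel_target() {  # setup_kernel_target <nvmet-root> <subnqn> <traddr> <blockdev>
    local nvmet=$1 subnqn=$2 traddr=$3 dev=$4
    local subsys=$nvmet/subsystems/$subnqn
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1/subsystems"
    echo 1 > "$subsys/attr_allow_any_host"            # no host allow-list yet
    echo "$dev" > "$subsys/namespaces/1/device_path"  # back the namespace
    echo 1 > "$subsys/namespaces/1/enable"
    echo "$traddr" > "$nvmet/ports/1/addr_traddr"     # TCP listener, 10.0.0.1:4420
    echo tcp  > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    # Publishing the subsystem == symlinking it under the port.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
}
```

Once the symlink exists, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns two log entries, exactly as in the trace: the discovery subsystem itself and the published `nqn.2024-02.io.spdk:cnode0`.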
00:19:10.114 13:48:12 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:10.114 13:48:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.114 13:48:12 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:19:10.114 13:48:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:10.114 13:48:12 -- host/auth.sh@68 -- # keyid=1 00:19:10.114 13:48:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:10.114 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.114 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.114 13:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.114 13:48:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.114 13:48:12 -- nvmf/common.sh@717 -- # local ip 00:19:10.114 13:48:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.114 13:48:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.114 13:48:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.114 13:48:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.114 13:48:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.114 13:48:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.114 13:48:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.114 13:48:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:10.114 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.114 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.114 nvme0n1 00:19:10.114 
13:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.114 13:48:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.114 13:48:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.114 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.114 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.114 13:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.372 13:48:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.372 13:48:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.372 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.372 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.372 13:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.372 13:48:12 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:10.372 13:48:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.372 13:48:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.372 13:48:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:10.372 13:48:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.372 13:48:12 -- host/auth.sh@44 -- # digest=sha256 00:19:10.372 13:48:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.372 13:48:12 -- host/auth.sh@44 -- # keyid=0 00:19:10.372 13:48:12 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:10.372 13:48:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.372 13:48:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.372 13:48:12 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:10.372 13:48:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:10.372 13:48:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.373 13:48:12 -- host/auth.sh@68 -- # digest=sha256 00:19:10.373 13:48:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
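From `host/auth.sh@107` onward the trace repeats one pattern per combination: re-key the kernel target (`nvmet_auth_set_key`), then `connect_authenticate`, which issues the matching `bdev_nvme_set_options --dhchap-digests`/`--dhchap-dhgroups` and `bdev_nvme_attach_controller` RPCs and tears the controller down again. The driver is just three nested loops over the digests, DH groups, and key indices printed at `@101`. A stubbed sketch of that matrix (the two functions here are placeholders, not the real helpers):

```shell
# Stubbed sketch of the digest x dhgroup x keyid matrix the trace walks.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # placeholders for the 5 generated key files

nvmet_auth_set_key()   { :; }                   # stand-in: re-key the kernel target
connect_authenticate() { runs=$((runs + 1)); }  # stand-in: attach/detach via RPC

runs=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
echo "$runs"   # 3 digests x 5 dhgroups x 5 keys = 75 attach/detach cycles
```

This is why the log from here on is dominated by the `nvme0n1` / `bdev_nvme_get_controllers` / `bdev_nvme_detach_controller` refrain: each of the 75 cycles verifies the controller came up as `nvme0` and then detaches it before the next combination.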
00:19:10.373 13:48:12 -- host/auth.sh@68 -- # keyid=0 00:19:10.373 13:48:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.373 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.373 13:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.373 13:48:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.373 13:48:12 -- nvmf/common.sh@717 -- # local ip 00:19:10.373 13:48:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.373 13:48:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.373 13:48:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.373 13:48:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.373 13:48:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.373 13:48:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.373 13:48:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.373 13:48:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.373 13:48:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.373 13:48:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:10.373 13:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:12 -- common/autotest_common.sh@10 -- # set +x 00:19:10.373 nvme0n1 00:19:10.373 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.373 13:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.373 13:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.373 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.373 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.373 13:48:13 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.373 13:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.373 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.373 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.373 13:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.373 13:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:10.373 13:48:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.373 13:48:13 -- host/auth.sh@44 -- # digest=sha256 00:19:10.373 13:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.373 13:48:13 -- host/auth.sh@44 -- # keyid=1 00:19:10.373 13:48:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:10.373 13:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.373 13:48:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.373 13:48:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:10.373 13:48:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:10.373 13:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.373 13:48:13 -- host/auth.sh@68 -- # digest=sha256 00:19:10.373 13:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.373 13:48:13 -- host/auth.sh@68 -- # keyid=1 00:19:10.373 13:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.373 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.373 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.373 13:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.373 13:48:13 -- nvmf/common.sh@717 -- # local ip 00:19:10.373 13:48:13 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.373 13:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.373 13:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.373 13:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.373 13:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.373 13:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.373 13:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.373 13:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.373 13:48:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.373 13:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:10.373 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.373 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.631 nvme0n1 00:19:10.631 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.631 13:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.631 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.631 13:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.631 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.631 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.631 13:48:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.631 13:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.631 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.631 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.631 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.631 13:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.631 13:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:10.631 13:48:13 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:19:10.631 13:48:13 -- host/auth.sh@44 -- # digest=sha256 00:19:10.631 13:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.631 13:48:13 -- host/auth.sh@44 -- # keyid=2 00:19:10.631 13:48:13 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:10.631 13:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.631 13:48:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.631 13:48:13 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:10.631 13:48:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:10.632 13:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.632 13:48:13 -- host/auth.sh@68 -- # digest=sha256 00:19:10.632 13:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.632 13:48:13 -- host/auth.sh@68 -- # keyid=2 00:19:10.632 13:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.632 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.632 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.632 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.632 13:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.632 13:48:13 -- nvmf/common.sh@717 -- # local ip 00:19:10.632 13:48:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.632 13:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.632 13:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.632 13:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.632 13:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.632 13:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.632 13:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.632 13:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.632 13:48:13 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:19:10.632 13:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:10.632 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.632 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 nvme0n1 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.890 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.890 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 13:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.890 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.890 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.890 13:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:10.890 13:48:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.890 13:48:13 -- host/auth.sh@44 -- # digest=sha256 00:19:10.890 13:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:10.890 13:48:13 -- host/auth.sh@44 -- # keyid=3 00:19:10.890 13:48:13 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:10.890 13:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:10.890 13:48:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:10.890 13:48:13 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 
00:19:10.890 13:48:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:10.890 13:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.890 13:48:13 -- host/auth.sh@68 -- # digest=sha256 00:19:10.890 13:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:10.890 13:48:13 -- host/auth.sh@68 -- # keyid=3 00:19:10.890 13:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.890 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.890 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.890 13:48:13 -- nvmf/common.sh@717 -- # local ip 00:19:10.890 13:48:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.890 13:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.890 13:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.890 13:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.890 13:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.890 13:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.890 13:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.890 13:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.890 13:48:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.890 13:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:10.890 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.890 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 nvme0n1 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.890 13:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.890 13:48:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.890 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.890 13:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.890 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.149 13:48:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.149 13:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.149 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.149 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.149 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.149 13:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.149 13:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:11.149 13:48:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.149 13:48:13 -- host/auth.sh@44 -- # digest=sha256 00:19:11.149 13:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:11.149 13:48:13 -- host/auth.sh@44 -- # keyid=4 00:19:11.149 13:48:13 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:11.149 13:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.149 13:48:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:11.149 13:48:13 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:11.149 13:48:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:11.149 13:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.149 13:48:13 -- host/auth.sh@68 -- # digest=sha256 00:19:11.150 13:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:11.150 13:48:13 -- host/auth.sh@68 -- # keyid=4 00:19:11.150 13:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:19:11.150 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.150 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.150 13:48:13 -- nvmf/common.sh@717 -- # local ip 00:19:11.150 13:48:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.150 13:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.150 13:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.150 13:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.150 13:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.150 13:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.150 13:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.150 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.150 nvme0n1 00:19:11.150 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.150 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.150 13:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.150 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.150 13:48:13 -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.150 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.150 13:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.150 13:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:11.150 13:48:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.150 13:48:13 -- host/auth.sh@44 -- # digest=sha256 00:19:11.150 13:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.150 13:48:13 -- host/auth.sh@44 -- # keyid=0 00:19:11.150 13:48:13 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:11.150 13:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.150 13:48:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.150 13:48:13 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:11.150 13:48:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:11.150 13:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.150 13:48:13 -- host/auth.sh@68 -- # digest=sha256 00:19:11.150 13:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.150 13:48:13 -- host/auth.sh@68 -- # keyid=0 00:19:11.150 13:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.150 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.150 13:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.150 13:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.150 13:48:13 -- nvmf/common.sh@717 -- # local ip 00:19:11.150 13:48:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.150 13:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.150 13:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.150 
13:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.150 13:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.150 13:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.150 13:48:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.150 13:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:11.150 13:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.150 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 nvme0n1 00:19:11.409 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.409 13:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.409 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.409 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 13:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.409 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.409 13:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.409 13:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.409 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.409 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.409 13:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.409 13:48:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:11.409 13:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.409 13:48:14 -- host/auth.sh@44 -- # digest=sha256 00:19:11.409 13:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.409 13:48:14 -- host/auth.sh@44 -- # keyid=1 
00:19:11.409 13:48:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:11.409 13:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.409 13:48:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.409 13:48:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:11.409 13:48:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:11.410 13:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.410 13:48:14 -- host/auth.sh@68 -- # digest=sha256 00:19:11.410 13:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.410 13:48:14 -- host/auth.sh@68 -- # keyid=1 00:19:11.410 13:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.410 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.410 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.410 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.410 13:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.410 13:48:14 -- nvmf/common.sh@717 -- # local ip 00:19:11.410 13:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.410 13:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.410 13:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.410 13:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.410 13:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.410 13:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.410 13:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.410 13:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.410 13:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.410 13:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:11.410 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.410 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.669 nvme0n1 00:19:11.669 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.669 13:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.669 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.669 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.669 13:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.669 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.669 13:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.669 13:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.669 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.669 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.669 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.669 13:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.669 13:48:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:11.669 13:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.669 13:48:14 -- host/auth.sh@44 -- # digest=sha256 00:19:11.669 13:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.669 13:48:14 -- host/auth.sh@44 -- # keyid=2 00:19:11.669 13:48:14 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:11.669 13:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.669 13:48:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.669 13:48:14 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:11.669 13:48:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:11.669 13:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.669 13:48:14 -- 
host/auth.sh@68 -- # digest=sha256 00:19:11.669 13:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.669 13:48:14 -- host/auth.sh@68 -- # keyid=2 00:19:11.669 13:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.669 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.669 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.669 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.669 13:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.669 13:48:14 -- nvmf/common.sh@717 -- # local ip 00:19:11.669 13:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.669 13:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.669 13:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.669 13:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.669 13:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.669 13:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.669 13:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.669 13:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.669 13:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.669 13:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:11.669 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.669 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.928 nvme0n1 00:19:11.928 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.928 13:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.928 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.928 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.928 13:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:19:11.928 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.928 13:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.928 13:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.928 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.928 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.928 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.928 13:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.928 13:48:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:11.928 13:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.928 13:48:14 -- host/auth.sh@44 -- # digest=sha256 00:19:11.928 13:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:11.928 13:48:14 -- host/auth.sh@44 -- # keyid=3 00:19:11.928 13:48:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:11.928 13:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:11.928 13:48:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:11.928 13:48:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:11.928 13:48:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:11.928 13:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.928 13:48:14 -- host/auth.sh@68 -- # digest=sha256 00:19:11.928 13:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:11.929 13:48:14 -- host/auth.sh@68 -- # keyid=3 00:19:11.929 13:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.929 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.929 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.929 13:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 
00:19:11.929 13:48:14 -- nvmf/common.sh@717 -- # local ip 00:19:11.929 13:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.929 13:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.929 13:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.929 13:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.929 13:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.929 13:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.929 13:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.929 13:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.929 13:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.929 13:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:11.929 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.929 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 nvme0n1 00:19:12.187 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.187 13:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.187 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.187 13:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.187 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.187 13:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.187 13:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.187 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.187 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.187 13:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.187 13:48:14 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:12.187 13:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.187 13:48:14 -- host/auth.sh@44 -- # digest=sha256 00:19:12.187 13:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:12.187 13:48:14 -- host/auth.sh@44 -- # keyid=4 00:19:12.187 13:48:14 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:12.187 13:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.187 13:48:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:12.187 13:48:14 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:12.187 13:48:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:12.187 13:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.187 13:48:14 -- host/auth.sh@68 -- # digest=sha256 00:19:12.187 13:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:12.187 13:48:14 -- host/auth.sh@68 -- # keyid=4 00:19:12.187 13:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.187 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.187 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:12.187 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.187 13:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.187 13:48:14 -- nvmf/common.sh@717 -- # local ip 00:19:12.187 13:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.187 13:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.187 13:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.187 13:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.187 13:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.187 13:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:19:12.187 13:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.187 13:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.187 13:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.187 13:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:12.187 13:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.187 13:48:14 -- common/autotest_common.sh@10 -- # set +x 00:19:12.445 nvme0n1 00:19:12.445 13:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.445 13:48:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.445 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.445 13:48:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.445 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.445 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.445 13:48:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.445 13:48:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.445 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.445 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.445 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.445 13:48:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.445 13:48:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.445 13:48:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:12.445 13:48:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.445 13:48:15 -- host/auth.sh@44 -- # digest=sha256 00:19:12.445 13:48:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:12.445 13:48:15 -- host/auth.sh@44 -- # keyid=0 00:19:12.445 13:48:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:12.445 13:48:15 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.445 13:48:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:12.445 13:48:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:12.445 13:48:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:12.445 13:48:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.445 13:48:15 -- host/auth.sh@68 -- # digest=sha256 00:19:12.445 13:48:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:12.445 13:48:15 -- host/auth.sh@68 -- # keyid=0 00:19:12.445 13:48:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.445 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.445 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.445 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.445 13:48:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.445 13:48:15 -- nvmf/common.sh@717 -- # local ip 00:19:12.445 13:48:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.445 13:48:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.445 13:48:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.445 13:48:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.445 13:48:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.445 13:48:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.445 13:48:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.445 13:48:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.445 13:48:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.445 13:48:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:12.446 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.446 13:48:15 -- common/autotest_common.sh@10 
-- # set +x 00:19:12.703 nvme0n1 00:19:12.703 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.703 13:48:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.703 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.703 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.703 13:48:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.703 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.703 13:48:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.703 13:48:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.703 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.703 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.703 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.703 13:48:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.703 13:48:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:12.703 13:48:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.703 13:48:15 -- host/auth.sh@44 -- # digest=sha256 00:19:12.703 13:48:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:12.703 13:48:15 -- host/auth.sh@44 -- # keyid=1 00:19:12.703 13:48:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:12.703 13:48:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:12.703 13:48:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:12.703 13:48:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:12.703 13:48:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:12.703 13:48:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.703 13:48:15 -- host/auth.sh@68 -- # digest=sha256 00:19:12.703 13:48:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:12.703 13:48:15 -- host/auth.sh@68 -- # keyid=1 00:19:12.703 
13:48:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.703 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.703 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:12.703 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.703 13:48:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.703 13:48:15 -- nvmf/common.sh@717 -- # local ip 00:19:12.703 13:48:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.703 13:48:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.703 13:48:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.703 13:48:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.703 13:48:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.703 13:48:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.703 13:48:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.703 13:48:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.703 13:48:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.703 13:48:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:12.703 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.703 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 nvme0n1 00:19:13.271 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.271 13:48:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.271 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.271 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 13:48:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.271 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.271 13:48:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.271 13:48:15 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.271 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.271 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.271 13:48:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.271 13:48:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:13.271 13:48:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.271 13:48:15 -- host/auth.sh@44 -- # digest=sha256 00:19:13.271 13:48:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.271 13:48:15 -- host/auth.sh@44 -- # keyid=2 00:19:13.271 13:48:15 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:13.271 13:48:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.271 13:48:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.271 13:48:15 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:13.271 13:48:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:19:13.271 13:48:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.271 13:48:15 -- host/auth.sh@68 -- # digest=sha256 00:19:13.271 13:48:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.271 13:48:15 -- host/auth.sh@68 -- # keyid=2 00:19:13.271 13:48:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.271 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.271 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 13:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.271 13:48:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.271 13:48:15 -- nvmf/common.sh@717 -- # local ip 00:19:13.271 13:48:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.271 13:48:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.271 13:48:15 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.271 13:48:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.271 13:48:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.271 13:48:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.271 13:48:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.271 13:48:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.271 13:48:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.271 13:48:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:13.271 13:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.271 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.560 nvme0n1 00:19:13.561 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.561 13:48:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.561 13:48:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.561 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.561 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.561 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.561 13:48:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.561 13:48:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.561 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.561 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.561 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.561 13:48:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.561 13:48:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:13.561 13:48:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.561 13:48:16 -- host/auth.sh@44 -- # digest=sha256 00:19:13.561 13:48:16 -- 
host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.561 13:48:16 -- host/auth.sh@44 -- # keyid=3 00:19:13.561 13:48:16 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:13.561 13:48:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.561 13:48:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.561 13:48:16 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:13.561 13:48:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:19:13.561 13:48:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.561 13:48:16 -- host/auth.sh@68 -- # digest=sha256 00:19:13.561 13:48:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.561 13:48:16 -- host/auth.sh@68 -- # keyid=3 00:19:13.561 13:48:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.561 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.561 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.561 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.561 13:48:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.561 13:48:16 -- nvmf/common.sh@717 -- # local ip 00:19:13.561 13:48:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.561 13:48:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.561 13:48:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.561 13:48:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.561 13:48:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.561 13:48:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.561 13:48:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.561 13:48:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.561 13:48:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.561 13:48:16 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:13.561 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.561 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 nvme0n1 00:19:13.819 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.819 13:48:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.819 13:48:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.819 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.819 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.819 13:48:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.819 13:48:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.819 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.819 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.819 13:48:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.819 13:48:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:13.819 13:48:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.819 13:48:16 -- host/auth.sh@44 -- # digest=sha256 00:19:13.819 13:48:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:13.819 13:48:16 -- host/auth.sh@44 -- # keyid=4 00:19:13.819 13:48:16 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:13.819 13:48:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:13.819 13:48:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:13.819 13:48:16 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:13.819 13:48:16 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:19:13.819 13:48:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.819 13:48:16 -- host/auth.sh@68 -- # digest=sha256 00:19:13.819 13:48:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:13.819 13:48:16 -- host/auth.sh@68 -- # keyid=4 00:19:13.819 13:48:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.819 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.819 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.819 13:48:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.819 13:48:16 -- nvmf/common.sh@717 -- # local ip 00:19:13.819 13:48:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.819 13:48:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.819 13:48:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.819 13:48:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.819 13:48:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.819 13:48:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.819 13:48:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.819 13:48:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.819 13:48:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.819 13:48:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.819 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.819 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:14.077 nvme0n1 00:19:14.077 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.077 13:48:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.077 13:48:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.077 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:14.077 13:48:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.077 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.335 13:48:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.335 13:48:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.335 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.335 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:14.335 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.335 13:48:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.335 13:48:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.335 13:48:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:14.335 13:48:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.335 13:48:16 -- host/auth.sh@44 -- # digest=sha256 00:19:14.335 13:48:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:14.335 13:48:16 -- host/auth.sh@44 -- # keyid=0 00:19:14.335 13:48:16 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:14.335 13:48:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:14.335 13:48:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:14.335 13:48:16 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:14.335 13:48:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:19:14.335 13:48:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.335 13:48:16 -- host/auth.sh@68 -- # digest=sha256 00:19:14.335 13:48:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:14.335 13:48:16 -- host/auth.sh@68 -- # keyid=0 00:19:14.335 13:48:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.335 13:48:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:14.335 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:14.335 13:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.335 13:48:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.335 13:48:16 -- nvmf/common.sh@717 -- # local ip 00:19:14.335 13:48:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.335 13:48:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.335 13:48:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.335 13:48:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.335 13:48:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.335 13:48:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.335 13:48:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.335 13:48:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.335 13:48:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.335 13:48:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:14.335 13:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.335 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:19:14.900 nvme0n1 00:19:14.900 13:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.900 13:48:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.900 13:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.900 13:48:17 -- common/autotest_common.sh@10 -- # set +x 00:19:14.900 13:48:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.900 13:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.900 13:48:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.900 13:48:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.900 13:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.900 13:48:17 -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.900 13:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.900 13:48:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.900 13:48:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:14.900 13:48:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.900 13:48:17 -- host/auth.sh@44 -- # digest=sha256 00:19:14.900 13:48:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:14.900 13:48:17 -- host/auth.sh@44 -- # keyid=1 00:19:14.900 13:48:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:14.900 13:48:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:14.900 13:48:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:14.900 13:48:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:14.900 13:48:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:19:14.900 13:48:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.900 13:48:17 -- host/auth.sh@68 -- # digest=sha256 00:19:14.900 13:48:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:14.900 13:48:17 -- host/auth.sh@68 -- # keyid=1 00:19:14.900 13:48:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.900 13:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.900 13:48:17 -- common/autotest_common.sh@10 -- # set +x 00:19:14.900 13:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.900 13:48:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.900 13:48:17 -- nvmf/common.sh@717 -- # local ip 00:19:14.900 13:48:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.900 13:48:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.900 13:48:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.900 13:48:17 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.900 13:48:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.900 13:48:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.900 13:48:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.900 13:48:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.900 13:48:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.900 13:48:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:14.900 13:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.900 13:48:17 -- common/autotest_common.sh@10 -- # set +x 00:19:15.466 nvme0n1 00:19:15.466 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.466 13:48:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.466 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.466 13:48:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.466 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:15.466 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.466 13:48:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.466 13:48:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.466 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.466 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:15.466 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.466 13:48:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.466 13:48:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:15.466 13:48:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.466 13:48:18 -- host/auth.sh@44 -- # digest=sha256 00:19:15.466 13:48:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:15.466 13:48:18 -- host/auth.sh@44 -- # keyid=2 00:19:15.466 13:48:18 -- 
host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:15.466 13:48:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:15.466 13:48:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:15.466 13:48:18 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:15.466 13:48:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:19:15.466 13:48:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.466 13:48:18 -- host/auth.sh@68 -- # digest=sha256 00:19:15.466 13:48:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:15.466 13:48:18 -- host/auth.sh@68 -- # keyid=2 00:19:15.466 13:48:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.466 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.466 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:15.466 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.466 13:48:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.466 13:48:18 -- nvmf/common.sh@717 -- # local ip 00:19:15.466 13:48:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.466 13:48:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.466 13:48:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.466 13:48:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.466 13:48:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.466 13:48:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.466 13:48:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.466 13:48:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.466 13:48:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.466 13:48:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.466 
13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.466 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:16.032 nvme0n1 00:19:16.032 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.032 13:48:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.032 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.032 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:16.032 13:48:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.032 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.032 13:48:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.032 13:48:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.032 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.032 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:16.032 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.032 13:48:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.032 13:48:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:16.032 13:48:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.032 13:48:18 -- host/auth.sh@44 -- # digest=sha256 00:19:16.032 13:48:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.032 13:48:18 -- host/auth.sh@44 -- # keyid=3 00:19:16.032 13:48:18 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:16.032 13:48:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:16.032 13:48:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:16.032 13:48:18 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:16.032 13:48:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:19:16.032 13:48:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.032 13:48:18 -- host/auth.sh@68 -- # digest=sha256 00:19:16.032 
13:48:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:16.032 13:48:18 -- host/auth.sh@68 -- # keyid=3 00:19:16.032 13:48:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.032 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.032 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:16.032 13:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.032 13:48:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.032 13:48:18 -- nvmf/common.sh@717 -- # local ip 00:19:16.032 13:48:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.032 13:48:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.032 13:48:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.032 13:48:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.032 13:48:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.032 13:48:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.032 13:48:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.032 13:48:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.032 13:48:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.032 13:48:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:16.032 13:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.032 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:19:16.597 nvme0n1 00:19:16.597 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.597 13:48:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.597 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.597 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:16.597 13:48:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.597 13:48:19 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:16.597 13:48:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.597 13:48:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.597 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.597 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:16.597 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.597 13:48:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.597 13:48:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:16.597 13:48:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.597 13:48:19 -- host/auth.sh@44 -- # digest=sha256 00:19:16.597 13:48:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.597 13:48:19 -- host/auth.sh@44 -- # keyid=4 00:19:16.597 13:48:19 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:16.597 13:48:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:16.597 13:48:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:16.597 13:48:19 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:16.597 13:48:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:19:16.597 13:48:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.597 13:48:19 -- host/auth.sh@68 -- # digest=sha256 00:19:16.597 13:48:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:16.597 13:48:19 -- host/auth.sh@68 -- # keyid=4 00:19:16.597 13:48:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.597 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.597 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:16.597 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.597 13:48:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.597 
13:48:19 -- nvmf/common.sh@717 -- # local ip 00:19:16.597 13:48:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.597 13:48:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.597 13:48:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.597 13:48:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.597 13:48:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.597 13:48:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.597 13:48:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.597 13:48:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.597 13:48:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.597 13:48:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.597 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.597 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.162 nvme0n1 00:19:17.162 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.162 13:48:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.162 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.162 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.162 13:48:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.162 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.162 13:48:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.162 13:48:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.162 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.162 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.162 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.420 13:48:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.420 13:48:19 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:17.420 13:48:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:17.420 13:48:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.420 13:48:19 -- host/auth.sh@44 -- # digest=sha256 00:19:17.420 13:48:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:17.420 13:48:19 -- host/auth.sh@44 -- # keyid=0 00:19:17.420 13:48:19 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:17.420 13:48:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:17.420 13:48:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:17.420 13:48:19 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:17.420 13:48:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.420 13:48:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.420 13:48:19 -- host/auth.sh@68 -- # digest=sha256 00:19:17.420 13:48:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:17.420 13:48:19 -- host/auth.sh@68 -- # keyid=0 00:19:17.420 13:48:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.420 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.420 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:17.420 13:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.420 13:48:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.420 13:48:19 -- nvmf/common.sh@717 -- # local ip 00:19:17.420 13:48:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.420 13:48:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.420 13:48:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.420 13:48:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.420 13:48:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:17.420 13:48:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.420 13:48:19 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:17.420 13:48:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:17.420 13:48:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:17.420 13:48:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:17.420 13:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.420 13:48:19 -- common/autotest_common.sh@10 -- # set +x 00:19:18.354 nvme0n1 00:19:18.354 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.354 13:48:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.354 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.354 13:48:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.354 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:18.354 13:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.354 13:48:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.354 13:48:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.354 13:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.354 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:19:18.354 13:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.354 13:48:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.354 13:48:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:18.354 13:48:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.354 13:48:21 -- host/auth.sh@44 -- # digest=sha256 00:19:18.354 13:48:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:18.354 13:48:21 -- host/auth.sh@44 -- # keyid=1 00:19:18.354 13:48:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:18.354 13:48:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:18.354 13:48:21 -- host/auth.sh@48 
-- # echo ffdhe8192 00:19:18.354 13:48:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:18.354 13:48:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:19:18.354 13:48:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.354 13:48:21 -- host/auth.sh@68 -- # digest=sha256 00:19:18.354 13:48:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:18.354 13:48:21 -- host/auth.sh@68 -- # keyid=1 00:19:18.354 13:48:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.354 13:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.354 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:18.354 13:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.354 13:48:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.354 13:48:21 -- nvmf/common.sh@717 -- # local ip 00:19:18.354 13:48:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.354 13:48:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.354 13:48:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.354 13:48:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.354 13:48:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.354 13:48:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.354 13:48:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.354 13:48:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.354 13:48:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.354 13:48:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:18.354 13:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.354 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 nvme0n1 00:19:19.288 13:48:21 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.288 13:48:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.288 13:48:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.288 13:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.288 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 13:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.288 13:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.288 13:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.288 13:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.288 13:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 13:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.288 13:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.288 13:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:19.288 13:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.288 13:48:22 -- host/auth.sh@44 -- # digest=sha256 00:19:19.288 13:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.288 13:48:22 -- host/auth.sh@44 -- # keyid=2 00:19:19.288 13:48:22 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:19.288 13:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:19.288 13:48:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:19.288 13:48:22 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:19.288 13:48:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:19:19.288 13:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.288 13:48:22 -- host/auth.sh@68 -- # digest=sha256 00:19:19.288 13:48:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:19.289 13:48:22 -- host/auth.sh@68 -- # keyid=2 00:19:19.289 13:48:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:19:19.289 13:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.289 13:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.289 13:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.289 13:48:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.289 13:48:22 -- nvmf/common.sh@717 -- # local ip 00:19:19.289 13:48:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.289 13:48:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.289 13:48:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.289 13:48:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.289 13:48:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.289 13:48:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.289 13:48:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.289 13:48:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.289 13:48:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.289 13:48:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:19.289 13:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.289 13:48:22 -- common/autotest_common.sh@10 -- # set +x 00:19:20.666 nvme0n1 00:19:20.666 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.666 13:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.666 13:48:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.666 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.666 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:20.666 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.666 13:48:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.666 13:48:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.666 13:48:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.666 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:20.666 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.666 13:48:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.666 13:48:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:20.667 13:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.667 13:48:23 -- host/auth.sh@44 -- # digest=sha256 00:19:20.667 13:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.667 13:48:23 -- host/auth.sh@44 -- # keyid=3 00:19:20.667 13:48:23 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:20.667 13:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:20.667 13:48:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:20.667 13:48:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:20.667 13:48:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:19:20.667 13:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.667 13:48:23 -- host/auth.sh@68 -- # digest=sha256 00:19:20.667 13:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:20.667 13:48:23 -- host/auth.sh@68 -- # keyid=3 00:19:20.667 13:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.667 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.667 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:20.667 13:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.667 13:48:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.667 13:48:23 -- nvmf/common.sh@717 -- # local ip 00:19:20.667 13:48:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.667 13:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.667 13:48:23 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.667 13:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.667 13:48:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.667 13:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.667 13:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.667 13:48:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.667 13:48:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.667 13:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:20.667 13:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.667 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:19:21.606 nvme0n1 00:19:21.606 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.606 13:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.606 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.606 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.606 13:48:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.606 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.606 13:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.606 13:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.606 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.606 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.606 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.606 13:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.606 13:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:21.606 13:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.606 13:48:24 -- host/auth.sh@44 -- # digest=sha256 00:19:21.606 13:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:19:21.606 13:48:24 -- host/auth.sh@44 -- # keyid=4
00:19:21.606 13:48:24 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:21.606 13:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:21.606 13:48:24 -- host/auth.sh@48 -- # echo ffdhe8192
00:19:21.606 13:48:24 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:21.606 13:48:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4
00:19:21.606 13:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:21.606 13:48:24 -- host/auth.sh@68 -- # digest=sha256
00:19:21.606 13:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:19:21.606 13:48:24 -- host/auth.sh@68 -- # keyid=4
00:19:21.606 13:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:21.606 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:21.606 13:48:24 -- common/autotest_common.sh@10 -- # set +x
00:19:21.606 13:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:21.606 13:48:24 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:21.606 13:48:24 -- nvmf/common.sh@717 -- # local ip
00:19:21.606 13:48:24 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:21.606 13:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:21.606 13:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:21.606 13:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:21.606 13:48:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:21.606 13:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:21.606 13:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:21.606 13:48:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:21.606 13:48:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:21.606 13:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:21.606 13:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:21.606 13:48:24 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 nvme0n1
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:19:22.541 13:48:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:22.541 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:22.541 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:19:22.541 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:22.541 13:48:25 -- host/auth.sh@44 -- # digest=sha384
00:19:22.541 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:22.541 13:48:25 -- host/auth.sh@44 -- # keyid=0
00:19:22.541 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:22.541 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:22.541 13:48:25 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:22.541 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:22.541 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0
00:19:22.541 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:22.541 13:48:25 -- host/auth.sh@68 -- # digest=sha384
00:19:22.541 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:22.541 13:48:25 -- host/auth.sh@68 -- # keyid=0
00:19:22.541 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:22.541 13:48:25 -- nvmf/common.sh@717 -- # local ip
00:19:22.541 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:22.541 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:22.541 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:22.541 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:22.541 13:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:22.541 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:22.541 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:22.541 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:22.541 13:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:22.541 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 nvme0n1
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:22.541 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.541 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:22.541 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.541 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:22.801 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:19:22.801 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # digest=sha384
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # keyid=1
00:19:22.801 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:22.801 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:22.801 13:48:25 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:22.801 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1
00:19:22.801 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # digest=sha384
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # keyid=1
00:19:22.801 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:22.801 13:48:25 -- nvmf/common.sh@717 -- # local ip
00:19:22.801 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:22.801 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:22.801 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:22.801 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 nvme0n1
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:22.801 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:19:22.801 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # digest=sha384
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@44 -- # keyid=2
00:19:22.801 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:22.801 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:22.801 13:48:25 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:22.801 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2
00:19:22.801 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # digest=sha384
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:22.801 13:48:25 -- host/auth.sh@68 -- # keyid=2
00:19:22.801 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:22.801 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:22.801 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:22.801 13:48:25 -- nvmf/common.sh@717 -- # local ip
00:19:22.801 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:22.801 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:22.801 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:22.801 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:22.801 13:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:22.801 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:22.801 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:22.801 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.061 nvme0n1
00:19:23.061 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.061 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.061 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.061 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.061 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:23.061 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.061 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.061 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.061 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.061 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.061 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.061 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:23.061 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:19:23.061 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:23.061 13:48:25 -- host/auth.sh@44 -- # digest=sha384
00:19:23.061 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:23.061 13:48:25 -- host/auth.sh@44 -- # keyid=3
00:19:23.061 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:23.061 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:23.061 13:48:25 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:23.061 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:23.061 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3
00:19:23.061 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:23.061 13:48:25 -- host/auth.sh@68 -- # digest=sha384
00:19:23.061 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:23.061 13:48:25 -- host/auth.sh@68 -- # keyid=3
00:19:23.061 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:23.061 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.061 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.061 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.061 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:23.061 13:48:25 -- nvmf/common.sh@717 -- # local ip
00:19:23.061 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:23.061 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:23.061 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:23.061 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:23.061 13:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:23.061 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:23.061 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:23.061 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:23.061 13:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:23.061 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:23.061 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.061 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 nvme0n1
00:19:23.319 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.319 13:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.319 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.319 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 13:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:23.319 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.319 13:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.319 13:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.319 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.319 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.319 13:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:23.319 13:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:19:23.319 13:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:23.319 13:48:25 -- host/auth.sh@44 -- # digest=sha384
00:19:23.319 13:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:23.319 13:48:25 -- host/auth.sh@44 -- # keyid=4
00:19:23.319 13:48:25 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:23.319 13:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:23.319 13:48:25 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:23.319 13:48:25 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:23.319 13:48:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4
00:19:23.319 13:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:23.319 13:48:25 -- host/auth.sh@68 -- # digest=sha384
00:19:23.319 13:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:23.319 13:48:25 -- host/auth.sh@68 -- # keyid=4
00:19:23.319 13:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:23.319 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.319 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 13:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.319 13:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:23.319 13:48:25 -- nvmf/common.sh@717 -- # local ip
00:19:23.319 13:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:23.319 13:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:23.319 13:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:23.319 13:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:23.319 13:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:23.319 13:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:23.319 13:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:23.319 13:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:23.319 13:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:23.319 13:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:23.319 13:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.319 13:48:25 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 nvme0n1
00:19:23.319 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.319 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.319 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.319 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.319 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:23.319 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.577 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.577 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.577 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:23.577 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:23.577 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:19:23.577 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:23.577 13:48:26 -- host/auth.sh@44 -- # digest=sha384
00:19:23.577 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:23.577 13:48:26 -- host/auth.sh@44 -- # keyid=0
00:19:23.577 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:23.577 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:23.577 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:23.577 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:23.577 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0
00:19:23.577 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:23.577 13:48:26 -- host/auth.sh@68 -- # digest=sha384
00:19:23.577 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:23.577 13:48:26 -- host/auth.sh@68 -- # keyid=0
00:19:23.577 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:23.577 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.577 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.577 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:23.577 13:48:26 -- nvmf/common.sh@717 -- # local ip
00:19:23.577 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:23.577 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:23.577 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:23.577 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:23.577 13:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:23.577 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:23.577 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:23.577 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:23.577 13:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:23.577 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:23.577 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.577 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.577 nvme0n1
00:19:23.577 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.577 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:23.577 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.577 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.577 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.577 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.577 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.577 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:23.836 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:19:23.836 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # digest=sha384
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # keyid=1
00:19:23.836 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:23.836 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:23.836 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:23.836 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1
00:19:23.836 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # digest=sha384
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # keyid=1
00:19:23.836 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:23.836 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.836 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:23.836 13:48:26 -- nvmf/common.sh@717 -- # local ip
00:19:23.836 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:23.836 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:23.836 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:23.836 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:23.836 13:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:23.836 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:23.836 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:23.836 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:23.836 13:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:23.836 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:23.836 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.836 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 nvme0n1
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.836 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.836 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:23.836 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.836 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.836 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:23.836 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:19:23.836 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # digest=sha384
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@44 -- # keyid=2
00:19:23.836 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:23.836 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:23.836 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:23.836 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2
00:19:23.836 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # digest=sha384
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:23.836 13:48:26 -- host/auth.sh@68 -- # keyid=2
00:19:23.836 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:23.836 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.836 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:23.836 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:23.836 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:23.836 13:48:26 -- nvmf/common.sh@717 -- # local ip
00:19:23.836 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:23.836 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:23.836 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:23.836 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:23.836 13:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:23.836 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:23.837 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:23.837 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:23.837 13:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:23.837 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:23.837 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:23.837 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:24.096 nvme0n1
13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.096 13:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.096 13:48:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:24.096 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.096 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:24.096 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.096 13:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.096 13:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:24.096 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.096 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:24.096 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.096 13:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:24.096 13:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:19:24.096 13:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:24.097 13:48:26 -- host/auth.sh@44 -- # digest=sha384
00:19:24.097 13:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:24.097 13:48:26 -- host/auth.sh@44 -- # keyid=3
00:19:24.097 13:48:26 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:24.097 13:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:24.097 13:48:26 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:24.097 13:48:26 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:24.097 13:48:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3
00:19:24.097 13:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:24.097 13:48:26 -- host/auth.sh@68 -- # digest=sha384
00:19:24.097 13:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:24.097 13:48:26 -- host/auth.sh@68 -- # keyid=3
00:19:24.097 13:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:24.097 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.097 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:24.097 13:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.097 13:48:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:24.097 13:48:26 -- nvmf/common.sh@717 -- # local ip
00:19:24.097 13:48:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:24.097 13:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:24.097 13:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:24.097 13:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:24.097 13:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:24.097 13:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:24.097 13:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:24.097 13:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:24.097 13:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:24.097 13:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:24.097 13:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.097 13:48:26 -- common/autotest_common.sh@10 -- # set +x
00:19:24.356 nvme0n1
00:19:24.357 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.357 13:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.357 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.357 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.357 13:48:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:24.357 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.357 13:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.357 13:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:24.357 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.357 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.357 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.357 13:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:24.357 13:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:19:24.357 13:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:24.357 13:48:27 -- host/auth.sh@44 -- # digest=sha384
00:19:24.357 13:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:24.357 13:48:27 -- host/auth.sh@44 -- # keyid=4
00:19:24.357 13:48:27 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:24.357 13:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:24.357 13:48:27 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:24.357 13:48:27 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:24.357 13:48:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4
00:19:24.357 13:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:24.357 13:48:27 -- host/auth.sh@68 -- # digest=sha384
00:19:24.357 13:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:24.357 13:48:27 -- host/auth.sh@68 -- # keyid=4
00:19:24.357 13:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:24.357 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.357 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.357 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.357 13:48:27 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:24.357 13:48:27 -- nvmf/common.sh@717 -- # local ip
00:19:24.357 13:48:27 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:24.357 13:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:24.357 13:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:24.357 13:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:24.357 13:48:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:24.357 13:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:24.357 13:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:24.357 13:48:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:24.357 13:48:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:24.357 13:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:24.357 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.357 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.617 nvme0n1
00:19:24.617 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.617 13:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.617 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.617 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.617 13:48:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:24.617 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.617 13:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.617 13:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:24.617 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.617 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.617 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.617 13:48:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:24.617 13:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:24.617 13:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:19:24.617 13:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:24.617 13:48:27 -- host/auth.sh@44 -- # digest=sha384
00:19:24.617 13:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:24.617 13:48:27 -- host/auth.sh@44 -- # keyid=0
00:19:24.617 13:48:27 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:24.617 13:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:24.617 13:48:27 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:24.617 13:48:27 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:24.617 13:48:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0
00:19:24.617 13:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:24.617 13:48:27 -- host/auth.sh@68 -- # digest=sha384
00:19:24.617 13:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:24.617 13:48:27 -- host/auth.sh@68 -- # keyid=0
00:19:24.617 13:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:24.617 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.617 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.617 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.617 13:48:27 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:24.617 13:48:27 -- nvmf/common.sh@717 -- # local ip
00:19:24.617 13:48:27 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:24.617 13:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:24.617 13:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:24.617 13:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:24.617 13:48:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:24.617 13:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:24.617 13:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:24.617 13:48:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:24.617 13:48:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:24.617 13:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:24.617 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.617 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.875 nvme0n1
00:19:24.875 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:24.875 13:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.875 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:24.875 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:24.875 13:48:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:25.134 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.134 13:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.134 13:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:25.134 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.134 13:48:27 -- common/autotest_common.sh@10 -- # set +x
00:19:25.134 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.134 13:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:25.134 13:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:19:25.134 13:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:25.134 13:48:27 -- host/auth.sh@44 -- # digest=sha384
00:19:25.134 13:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:25.134 13:48:27 -- host/auth.sh@44 -- # keyid=1
00:19:25.134 13:48:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:25.134 13:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:25.134 13:48:27 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:25.134 13:48:27 -- host/auth.sh@49 -- # echo
DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:25.134 13:48:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:25.134 13:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.134 13:48:27 -- host/auth.sh@68 -- # digest=sha384 00:19:25.134 13:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.134 13:48:27 -- host/auth.sh@68 -- # keyid=1 00:19:25.134 13:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.134 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.134 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.134 13:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.134 13:48:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.134 13:48:27 -- nvmf/common.sh@717 -- # local ip 00:19:25.134 13:48:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.134 13:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.134 13:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.134 13:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.134 13:48:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.134 13:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.134 13:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.134 13:48:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.134 13:48:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.134 13:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:25.134 13:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.134 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 nvme0n1 00:19:25.392 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.392 13:48:28 
-- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.392 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.392 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.392 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.392 13:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.392 13:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.392 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.392 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.392 13:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.392 13:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:25.392 13:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.392 13:48:28 -- host/auth.sh@44 -- # digest=sha384 00:19:25.392 13:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.392 13:48:28 -- host/auth.sh@44 -- # keyid=2 00:19:25.392 13:48:28 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:25.392 13:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.392 13:48:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.392 13:48:28 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:25.392 13:48:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:25.392 13:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.392 13:48:28 -- host/auth.sh@68 -- # digest=sha384 00:19:25.392 13:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.392 13:48:28 -- host/auth.sh@68 -- # keyid=2 00:19:25.392 13:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.392 13:48:28 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:25.392 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.392 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.392 13:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.392 13:48:28 -- nvmf/common.sh@717 -- # local ip 00:19:25.392 13:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.392 13:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.392 13:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.392 13:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.392 13:48:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.392 13:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.392 13:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.392 13:48:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.392 13:48:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.392 13:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.392 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.392 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.651 nvme0n1 00:19:25.651 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.651 13:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.651 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.651 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.651 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.651 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.651 13:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.651 13:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.651 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.651 13:48:28 -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.651 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.651 13:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.651 13:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:25.651 13:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.651 13:48:28 -- host/auth.sh@44 -- # digest=sha384 00:19:25.651 13:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.651 13:48:28 -- host/auth.sh@44 -- # keyid=3 00:19:25.651 13:48:28 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:25.651 13:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.651 13:48:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.651 13:48:28 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:25.651 13:48:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:25.651 13:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.651 13:48:28 -- host/auth.sh@68 -- # digest=sha384 00:19:25.651 13:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.651 13:48:28 -- host/auth.sh@68 -- # keyid=3 00:19:25.651 13:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.651 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.651 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.651 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.651 13:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.651 13:48:28 -- nvmf/common.sh@717 -- # local ip 00:19:25.651 13:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.651 13:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.651 13:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.651 13:48:28 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.651 13:48:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.651 13:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.651 13:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.651 13:48:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.651 13:48:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.651 13:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:25.651 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.651 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.910 nvme0n1 00:19:25.911 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.911 13:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.911 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.911 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.911 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.911 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.911 13:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.911 13:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.911 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.911 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.911 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.911 13:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:25.911 13:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:25.911 13:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.911 13:48:28 -- host/auth.sh@44 -- # digest=sha384 00:19:25.911 13:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.911 13:48:28 -- host/auth.sh@44 -- # keyid=4 00:19:25.911 13:48:28 -- 
host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:25.911 13:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:25.911 13:48:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:25.911 13:48:28 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:25.911 13:48:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:25.911 13:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:25.911 13:48:28 -- host/auth.sh@68 -- # digest=sha384 00:19:25.911 13:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:25.911 13:48:28 -- host/auth.sh@68 -- # keyid=4 00:19:25.911 13:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.911 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.911 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.911 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.911 13:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:25.911 13:48:28 -- nvmf/common.sh@717 -- # local ip 00:19:25.911 13:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.911 13:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.911 13:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.911 13:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.911 13:48:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:25.911 13:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.911 13:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:25.911 13:48:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:25.911 13:48:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:25.911 13:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.911 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.911 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.477 nvme0n1 00:19:26.477 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.477 13:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.477 13:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.477 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:19:26.477 13:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.477 13:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.477 13:48:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.477 13:48:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.477 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.477 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.477 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.477 13:48:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.477 13:48:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.477 13:48:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:26.477 13:48:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.477 13:48:29 -- host/auth.sh@44 -- # digest=sha384 00:19:26.477 13:48:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:26.477 13:48:29 -- host/auth.sh@44 -- # keyid=0 00:19:26.477 13:48:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:26.477 13:48:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:26.477 13:48:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:26.477 13:48:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:26.477 13:48:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:26.477 13:48:29 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.477 13:48:29 -- host/auth.sh@68 -- # digest=sha384 00:19:26.477 13:48:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:26.477 13:48:29 -- host/auth.sh@68 -- # keyid=0 00:19:26.477 13:48:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.477 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.477 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.477 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.477 13:48:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.477 13:48:29 -- nvmf/common.sh@717 -- # local ip 00:19:26.477 13:48:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.477 13:48:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.477 13:48:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.477 13:48:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.477 13:48:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:26.477 13:48:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.477 13:48:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.477 13:48:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.477 13:48:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:26.477 13:48:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:26.477 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.477 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.073 nvme0n1 00:19:27.073 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.073 13:48:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.073 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.073 13:48:29 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:19:27.073 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.073 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.073 13:48:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.073 13:48:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.073 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.073 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.073 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.073 13:48:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:27.073 13:48:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:27.073 13:48:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.073 13:48:29 -- host/auth.sh@44 -- # digest=sha384 00:19:27.073 13:48:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.073 13:48:29 -- host/auth.sh@44 -- # keyid=1 00:19:27.073 13:48:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:27.073 13:48:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:27.073 13:48:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:27.073 13:48:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:27.073 13:48:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:27.073 13:48:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.073 13:48:29 -- host/auth.sh@68 -- # digest=sha384 00:19:27.073 13:48:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:27.073 13:48:29 -- host/auth.sh@68 -- # keyid=1 00:19:27.073 13:48:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.073 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.073 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.073 13:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:19:27.073 13:48:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.073 13:48:29 -- nvmf/common.sh@717 -- # local ip 00:19:27.073 13:48:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.073 13:48:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.073 13:48:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.073 13:48:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.073 13:48:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.073 13:48:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.073 13:48:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.073 13:48:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.073 13:48:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.073 13:48:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:27.073 13:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.073 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 nvme0n1 00:19:27.643 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.643 13:48:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.643 13:48:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.643 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.643 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.643 13:48:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.643 13:48:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.643 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.643 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.643 13:48:30 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:27.643 13:48:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:27.643 13:48:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.643 13:48:30 -- host/auth.sh@44 -- # digest=sha384 00:19:27.643 13:48:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.643 13:48:30 -- host/auth.sh@44 -- # keyid=2 00:19:27.643 13:48:30 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:27.643 13:48:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:27.643 13:48:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:27.643 13:48:30 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:27.643 13:48:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:27.643 13:48:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.643 13:48:30 -- host/auth.sh@68 -- # digest=sha384 00:19:27.643 13:48:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:27.643 13:48:30 -- host/auth.sh@68 -- # keyid=2 00:19:27.643 13:48:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.643 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.643 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.643 13:48:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.643 13:48:30 -- nvmf/common.sh@717 -- # local ip 00:19:27.643 13:48:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.643 13:48:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.643 13:48:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.643 13:48:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.643 13:48:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.643 13:48:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.643 13:48:30 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.643 13:48:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.643 13:48:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.643 13:48:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.643 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.643 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 nvme0n1 00:19:28.212 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.212 13:48:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.212 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.212 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 13:48:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.212 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.212 13:48:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.212 13:48:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.212 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.212 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.212 13:48:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.212 13:48:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:28.212 13:48:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.212 13:48:30 -- host/auth.sh@44 -- # digest=sha384 00:19:28.212 13:48:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.212 13:48:30 -- host/auth.sh@44 -- # keyid=3 00:19:28.212 13:48:30 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:28.212 13:48:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:28.212 13:48:30 -- host/auth.sh@48 
-- # echo ffdhe6144 00:19:28.212 13:48:30 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:28.212 13:48:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:28.212 13:48:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.212 13:48:30 -- host/auth.sh@68 -- # digest=sha384 00:19:28.212 13:48:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:28.212 13:48:30 -- host/auth.sh@68 -- # keyid=3 00:19:28.212 13:48:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.212 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.212 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 13:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.212 13:48:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.212 13:48:30 -- nvmf/common.sh@717 -- # local ip 00:19:28.212 13:48:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.212 13:48:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.212 13:48:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.212 13:48:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.212 13:48:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.212 13:48:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.212 13:48:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.212 13:48:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.212 13:48:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:28.212 13:48:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:28.212 13:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.212 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 nvme0n1 00:19:28.776 13:48:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.776 13:48:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.776 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.776 13:48:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.776 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.776 13:48:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.776 13:48:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.776 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.776 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.776 13:48:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.776 13:48:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:28.776 13:48:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.776 13:48:31 -- host/auth.sh@44 -- # digest=sha384 00:19:28.776 13:48:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.776 13:48:31 -- host/auth.sh@44 -- # keyid=4 00:19:28.776 13:48:31 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:28.776 13:48:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:28.776 13:48:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:28.776 13:48:31 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:28.776 13:48:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:28.776 13:48:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.776 13:48:31 -- host/auth.sh@68 -- # digest=sha384 00:19:28.776 13:48:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:28.776 13:48:31 -- host/auth.sh@68 -- # keyid=4 00:19:28.776 13:48:31 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.776 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.776 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 13:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.776 13:48:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.776 13:48:31 -- nvmf/common.sh@717 -- # local ip 00:19:28.776 13:48:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.776 13:48:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.777 13:48:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.777 13:48:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.777 13:48:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.777 13:48:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.777 13:48:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.777 13:48:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.777 13:48:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:28.777 13:48:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:28.777 13:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.777 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:19:29.343 nvme0n1 00:19:29.343 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.343 13:48:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.343 13:48:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:29.343 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.343 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.343 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.343 13:48:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.343 13:48:32 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.343 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.343 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.343 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.343 13:48:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.343 13:48:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:29.343 13:48:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:29.343 13:48:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:29.343 13:48:32 -- host/auth.sh@44 -- # digest=sha384 00:19:29.343 13:48:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.343 13:48:32 -- host/auth.sh@44 -- # keyid=0 00:19:29.343 13:48:32 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:29.343 13:48:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:29.343 13:48:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:29.343 13:48:32 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:29.343 13:48:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:29.343 13:48:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:29.343 13:48:32 -- host/auth.sh@68 -- # digest=sha384 00:19:29.343 13:48:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:29.343 13:48:32 -- host/auth.sh@68 -- # keyid=0 00:19:29.343 13:48:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.343 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.343 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:29.343 13:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.343 13:48:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:29.343 13:48:32 -- nvmf/common.sh@717 -- # local ip 00:19:29.343 13:48:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:29.343 13:48:32 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:29.343 13:48:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.343 13:48:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.343 13:48:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:29.343 13:48:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.343 13:48:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:29.343 13:48:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:29.343 13:48:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:29.343 13:48:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:29.343 13:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.343 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:19:30.717 nvme0n1 00:19:30.717 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.717 13:48:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.717 13:48:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:30.717 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.717 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.717 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.717 13:48:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.717 13:48:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.717 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.717 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.717 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.717 13:48:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:30.717 13:48:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:30.717 13:48:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:30.717 13:48:33 -- 
host/auth.sh@44 -- # digest=sha384 00:19:30.717 13:48:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.717 13:48:33 -- host/auth.sh@44 -- # keyid=1 00:19:30.717 13:48:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:30.717 13:48:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:30.717 13:48:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:30.718 13:48:33 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:30.718 13:48:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:30.718 13:48:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:30.718 13:48:33 -- host/auth.sh@68 -- # digest=sha384 00:19:30.718 13:48:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:30.718 13:48:33 -- host/auth.sh@68 -- # keyid=1 00:19:30.718 13:48:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.718 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.718 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.718 13:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.718 13:48:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:30.718 13:48:33 -- nvmf/common.sh@717 -- # local ip 00:19:30.718 13:48:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:30.718 13:48:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:30.718 13:48:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.718 13:48:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.718 13:48:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:30.718 13:48:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.718 13:48:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:30.718 13:48:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:30.718 13:48:33 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:30.718 13:48:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:30.718 13:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.718 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 nvme0n1 00:19:31.655 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.655 13:48:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.655 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.655 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 13:48:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:31.655 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.655 13:48:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.655 13:48:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.655 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.655 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.655 13:48:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:31.655 13:48:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:31.655 13:48:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:31.655 13:48:34 -- host/auth.sh@44 -- # digest=sha384 00:19:31.655 13:48:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.655 13:48:34 -- host/auth.sh@44 -- # keyid=2 00:19:31.655 13:48:34 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:31.655 13:48:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:31.655 13:48:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:31.655 13:48:34 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:31.655 13:48:34 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe8192 2 00:19:31.655 13:48:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:31.655 13:48:34 -- host/auth.sh@68 -- # digest=sha384 00:19:31.655 13:48:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:31.655 13:48:34 -- host/auth.sh@68 -- # keyid=2 00:19:31.655 13:48:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.655 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.655 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 13:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.655 13:48:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:31.655 13:48:34 -- nvmf/common.sh@717 -- # local ip 00:19:31.655 13:48:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:31.655 13:48:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:31.655 13:48:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.655 13:48:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.655 13:48:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:31.655 13:48:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.655 13:48:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:31.655 13:48:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:31.655 13:48:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:31.655 13:48:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:31.655 13:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.655 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:19:32.591 nvme0n1 00:19:32.591 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.591 13:48:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.591 13:48:35 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:19:32.591 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.591 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:32.591 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.591 13:48:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.591 13:48:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.591 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.591 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:32.591 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.591 13:48:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:32.591 13:48:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:32.591 13:48:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:32.591 13:48:35 -- host/auth.sh@44 -- # digest=sha384 00:19:32.591 13:48:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:32.591 13:48:35 -- host/auth.sh@44 -- # keyid=3 00:19:32.591 13:48:35 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:32.591 13:48:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:32.591 13:48:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:32.591 13:48:35 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:32.591 13:48:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:32.591 13:48:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:32.591 13:48:35 -- host/auth.sh@68 -- # digest=sha384 00:19:32.591 13:48:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:32.591 13:48:35 -- host/auth.sh@68 -- # keyid=3 00:19:32.591 13:48:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.591 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.591 13:48:35 -- common/autotest_common.sh@10 -- # 
set +x 00:19:32.591 13:48:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.591 13:48:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:32.591 13:48:35 -- nvmf/common.sh@717 -- # local ip 00:19:32.591 13:48:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:32.591 13:48:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:32.591 13:48:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.591 13:48:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.591 13:48:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:32.591 13:48:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.591 13:48:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:32.591 13:48:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:32.591 13:48:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:32.591 13:48:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:32.591 13:48:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.591 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.965 nvme0n1 00:19:33.965 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.965 13:48:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.965 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.965 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 13:48:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:33.966 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.966 13:48:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.966 13:48:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.966 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.966 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 13:48:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.966 13:48:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:33.966 13:48:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:33.966 13:48:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:33.966 13:48:36 -- host/auth.sh@44 -- # digest=sha384 00:19:33.966 13:48:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.966 13:48:36 -- host/auth.sh@44 -- # keyid=4 00:19:33.966 13:48:36 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:33.966 13:48:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:33.966 13:48:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:33.966 13:48:36 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:33.966 13:48:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:33.966 13:48:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:33.966 13:48:36 -- host/auth.sh@68 -- # digest=sha384 00:19:33.966 13:48:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:33.966 13:48:36 -- host/auth.sh@68 -- # keyid=4 00:19:33.966 13:48:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.966 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.966 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 13:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.966 13:48:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:33.966 13:48:36 -- nvmf/common.sh@717 -- # local ip 00:19:33.966 13:48:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:33.966 13:48:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:33.966 13:48:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.966 13:48:36 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.966 13:48:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:33.966 13:48:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.966 13:48:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:33.966 13:48:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:33.966 13:48:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:33.966 13:48:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:33.966 13:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.966 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 nvme0n1 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:34.902 13:48:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.902 13:48:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.902 13:48:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:34.902 13:48:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.902 13:48:37 -- host/auth.sh@44 -- # digest=sha512 
00:19:34.902 13:48:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@44 -- # keyid=0 00:19:34.902 13:48:37 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:34.902 13:48:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:34.902 13:48:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:34.902 13:48:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:34.902 13:48:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # digest=sha512 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # keyid=0 00:19:34.902 13:48:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.902 13:48:37 -- nvmf/common.sh@717 -- # local ip 00:19:34.902 13:48:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.902 13:48:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.902 13:48:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:34.902 13:48:37 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 nvme0n1 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:34.902 13:48:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:34.902 13:48:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:34.902 13:48:37 -- host/auth.sh@44 -- # digest=sha512 00:19:34.902 13:48:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@44 -- # keyid=1 00:19:34.902 13:48:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:34.902 13:48:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:34.902 13:48:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:34.902 13:48:37 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe2048 1 00:19:34.902 13:48:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # digest=sha512 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:34.902 13:48:37 -- host/auth.sh@68 -- # keyid=1 00:19:34.902 13:48:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.902 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.902 13:48:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:34.902 13:48:37 -- nvmf/common.sh@717 -- # local ip 00:19:34.902 13:48:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:34.902 13:48:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:34.902 13:48:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:34.902 13:48:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:34.902 13:48:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:34.902 13:48:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:34.902 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.902 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:35.161 nvme0n1 00:19:35.161 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.161 13:48:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.161 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.161 13:48:37 -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.161 13:48:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.161 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.161 13:48:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.161 13:48:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.161 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.161 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:35.161 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.161 13:48:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.161 13:48:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:35.161 13:48:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.161 13:48:37 -- host/auth.sh@44 -- # digest=sha512 00:19:35.161 13:48:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.161 13:48:37 -- host/auth.sh@44 -- # keyid=2 00:19:35.161 13:48:37 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:35.161 13:48:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.161 13:48:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:35.161 13:48:37 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:35.161 13:48:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:35.161 13:48:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.161 13:48:37 -- host/auth.sh@68 -- # digest=sha512 00:19:35.161 13:48:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:35.161 13:48:37 -- host/auth.sh@68 -- # keyid=2 00:19:35.161 13:48:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.161 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.161 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:35.161 13:48:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:19:35.161 13:48:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.161 13:48:37 -- nvmf/common.sh@717 -- # local ip 00:19:35.161 13:48:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.161 13:48:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.161 13:48:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.161 13:48:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.161 13:48:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.161 13:48:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.161 13:48:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.161 13:48:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.161 13:48:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.161 13:48:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:35.161 13:48:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.161 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:19:35.419 nvme0n1 00:19:35.419 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.419 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.419 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.419 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.419 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.419 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.419 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.419 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.419 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.419 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.419 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.419 13:48:38 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:19:35.419 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:35.419 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.419 13:48:38 -- host/auth.sh@44 -- # digest=sha512 00:19:35.419 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.419 13:48:38 -- host/auth.sh@44 -- # keyid=3 00:19:35.419 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:35.419 13:48:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:35.419 13:48:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:35.419 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:35.419 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:35.419 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:35.419 13:48:38 -- host/auth.sh@68 -- # digest=sha512 00:19:35.419 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:35.419 13:48:38 -- host/auth.sh@68 -- # keyid=3 00:19:35.419 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.419 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.419 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.419 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.419 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:35.419 13:48:38 -- nvmf/common.sh@717 -- # local ip 00:19:35.419 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:35.419 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:35.419 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.420 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.420 13:48:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:35.420 13:48:38 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:19:35.420 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:35.420 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:35.420 13:48:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:35.420 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:35.420 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.420 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.420 nvme0n1 00:19:35.420 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.420 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.420 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.420 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.420 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:35.420 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.679 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.679 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.679 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.679 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.679 13:48:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:35.679 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:35.679 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:35.679 13:48:38 -- host/auth.sh@44 -- # digest=sha512 00:19:35.679 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.679 13:48:38 -- host/auth.sh@44 -- # keyid=4 00:19:35.679 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:35.679 13:48:38 -- host/auth.sh@47 
-- # echo 'hmac(sha512)'
00:19:35.679 13:48:38 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:35.679 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:35.679 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4
00:19:35.679 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:35.679 13:48:38 -- host/auth.sh@68 -- # digest=sha512
00:19:35.679 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:35.679 13:48:38 -- host/auth.sh@68 -- # keyid=4
00:19:35.679 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:35.679 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.679 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.679 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.679 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:35.679 13:48:38 -- nvmf/common.sh@717 -- # local ip
00:19:35.679 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:35.679 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:35.679 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:35.679 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:35.679 13:48:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:35.679 13:48:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:35.679 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:35.679 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:35.679 13:48:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:35.679 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:35.679 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.679 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.679 nvme0n1
00:19:35.679 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.679 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:35.679 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:35.679 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.679 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.679 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.679 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.679 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:35.679 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.679 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.679 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.679 13:48:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:35.679 13:48:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:35.679 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:19:35.679 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:35.679 13:48:38 -- host/auth.sh@44 -- # digest=sha512
00:19:35.679 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:35.679 13:48:38 -- host/auth.sh@44 -- # keyid=0
00:19:35.679 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:35.679 13:48:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:35.680 13:48:38 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:35.680 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:35.680 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0
00:19:35.680 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:35.680 13:48:38 -- host/auth.sh@68 -- # digest=sha512
00:19:35.680 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:35.680 13:48:38 -- host/auth.sh@68 -- # keyid=0
00:19:35.680 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:35.680 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.680 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.680 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.680 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:35.680 13:48:38 -- nvmf/common.sh@717 -- # local ip
00:19:35.680 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:35.680 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:35.680 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:35.680 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:35.680 13:48:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:35.680 13:48:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:35.680 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:35.680 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:35.680 13:48:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:35.939 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:35.939 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.939 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.939 nvme0n1
00:19:35.939 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.939 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:35.939 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:35.939 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.939 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.939 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.939 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.939 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:35.939 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.939 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.939 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.939 13:48:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:35.939 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:19:35.939 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:35.939 13:48:38 -- host/auth.sh@44 -- # digest=sha512
00:19:35.939 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:35.939 13:48:38 -- host/auth.sh@44 -- # keyid=1
00:19:35.939 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:35.939 13:48:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:35.939 13:48:38 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:35.939 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:35.939 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1
00:19:35.939 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:35.939 13:48:38 -- host/auth.sh@68 -- # digest=sha512
00:19:35.939 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:35.939 13:48:38 -- host/auth.sh@68 -- # keyid=1
00:19:35.939 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:35.939 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:35.939 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:35.939 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:35.939 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:35.939 13:48:38 -- nvmf/common.sh@717 -- # local ip
00:19:35.939 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:35.939 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:35.939 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:35.939 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:35.939 13:48:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:35.939 13:48:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:35.939 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:35.939 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:35.939 13:48:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:36.197 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:36.197 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.197 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:36.197 nvme0n1
00:19:36.197 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.197 13:48:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:36.197 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.197 13:48:38 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:36.197 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:36.197 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.197 13:48:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.197 13:48:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:36.197 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.197 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:36.197 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.197 13:48:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:36.198 13:48:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:19:36.198 13:48:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:36.198 13:48:38 -- host/auth.sh@44 -- # digest=sha512
00:19:36.198 13:48:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:36.198 13:48:38 -- host/auth.sh@44 -- # keyid=2
00:19:36.198 13:48:38 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:36.198 13:48:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:36.198 13:48:38 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:36.198 13:48:38 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:36.198 13:48:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2
00:19:36.198 13:48:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:36.198 13:48:38 -- host/auth.sh@68 -- # digest=sha512
00:19:36.198 13:48:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:36.198 13:48:38 -- host/auth.sh@68 -- # keyid=2
00:19:36.198 13:48:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:36.198 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.198 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:36.198 13:48:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.198 13:48:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:36.198 13:48:38 -- nvmf/common.sh@717 -- # local ip
00:19:36.198 13:48:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:36.198 13:48:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:36.198 13:48:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:36.198 13:48:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:36.198 13:48:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:36.198 13:48:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:36.198 13:48:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:36.198 13:48:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:36.198 13:48:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:36.198 13:48:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:36.198 13:48:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.198 13:48:38 -- common/autotest_common.sh@10 -- # set +x
00:19:36.456 nvme0n1
00:19:36.456 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.456 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:36.456 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.456 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:36.456 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.456 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.456 13:48:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.456 13:48:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:36.456 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.456 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.456 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.456 13:48:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:36.456 13:48:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:19:36.456 13:48:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:36.456 13:48:39 -- host/auth.sh@44 -- # digest=sha512
00:19:36.456 13:48:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:36.456 13:48:39 -- host/auth.sh@44 -- # keyid=3
00:19:36.456 13:48:39 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:36.456 13:48:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:36.456 13:48:39 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:36.456 13:48:39 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:36.456 13:48:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3
00:19:36.456 13:48:39 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:36.456 13:48:39 -- host/auth.sh@68 -- # digest=sha512
00:19:36.456 13:48:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:36.456 13:48:39 -- host/auth.sh@68 -- # keyid=3
00:19:36.456 13:48:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:36.456 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.456 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.456 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.456 13:48:39 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:36.456 13:48:39 -- nvmf/common.sh@717 -- # local ip
00:19:36.456 13:48:39 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:36.456 13:48:39 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:36.456 13:48:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:36.456 13:48:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:36.456 13:48:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:36.456 13:48:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:36.456 13:48:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:36.456 13:48:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:36.456 13:48:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:36.456 13:48:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:36.456 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.456 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.716 nvme0n1
00:19:36.716 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.716 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:36.716 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.716 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.716 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:36.716 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.716 13:48:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.716 13:48:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:36.716 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.716 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.716 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.716 13:48:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:36.716 13:48:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:19:36.716 13:48:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:36.716 13:48:39 -- host/auth.sh@44 -- # digest=sha512
00:19:36.716 13:48:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:36.716 13:48:39 -- host/auth.sh@44 -- # keyid=4
00:19:36.716 13:48:39 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:36.716 13:48:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:36.716 13:48:39 -- host/auth.sh@48 -- # echo ffdhe3072
00:19:36.716 13:48:39 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:36.716 13:48:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4
00:19:36.716 13:48:39 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:36.716 13:48:39 -- host/auth.sh@68 -- # digest=sha512
00:19:36.716 13:48:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:19:36.716 13:48:39 -- host/auth.sh@68 -- # keyid=4
00:19:36.716 13:48:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:36.716 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.716 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.716 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.716 13:48:39 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:36.716 13:48:39 -- nvmf/common.sh@717 -- # local ip
00:19:36.716 13:48:39 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:36.716 13:48:39 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:36.716 13:48:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:36.716 13:48:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:36.716 13:48:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:36.716 13:48:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:36.716 13:48:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:36.716 13:48:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:36.716 13:48:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:36.716 13:48:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:36.716 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.716 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.976 nvme0n1
00:19:36.976 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.976 13:48:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:36.976 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.976 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.976 13:48:39 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:36.976 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.976 13:48:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.976 13:48:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:36.976 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.976 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.976 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.976 13:48:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:36.976 13:48:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:36.976 13:48:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:19:36.976 13:48:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:36.976 13:48:39 -- host/auth.sh@44 -- # digest=sha512
00:19:36.976 13:48:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:36.976 13:48:39 -- host/auth.sh@44 -- # keyid=0
00:19:36.976 13:48:39 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:36.976 13:48:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:36.976 13:48:39 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:36.976 13:48:39 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:36.976 13:48:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0
00:19:36.976 13:48:39 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:36.976 13:48:39 -- host/auth.sh@68 -- # digest=sha512
00:19:36.976 13:48:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:36.976 13:48:39 -- host/auth.sh@68 -- # keyid=0
00:19:36.976 13:48:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:36.976 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.976 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:36.976 13:48:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.976 13:48:39 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:36.976 13:48:39 -- nvmf/common.sh@717 -- # local ip
00:19:36.976 13:48:39 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:36.976 13:48:39 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:36.976 13:48:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:36.976 13:48:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:36.976 13:48:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:36.976 13:48:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:36.976 13:48:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:36.976 13:48:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:36.976 13:48:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:36.976 13:48:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:36.976 13:48:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.976 13:48:39 -- common/autotest_common.sh@10 -- # set +x
00:19:37.235 nvme0n1
00:19:37.235 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.235 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:37.492 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.492 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:37.492 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.492 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.492 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:37.492 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:37.492 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.492 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.492 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.492 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:37.492 13:48:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:19:37.492 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:37.492 13:48:40 -- host/auth.sh@44 -- # digest=sha512
00:19:37.492 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:37.492 13:48:40 -- host/auth.sh@44 -- # keyid=1
00:19:37.492 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:37.492 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:37.492 13:48:40 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:37.492 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==:
00:19:37.492 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1
00:19:37.492 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:37.492 13:48:40 -- host/auth.sh@68 -- # digest=sha512
00:19:37.492 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:37.492 13:48:40 -- host/auth.sh@68 -- # keyid=1
00:19:37.492 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:37.492 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.492 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.492 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.492 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:37.492 13:48:40 -- nvmf/common.sh@717 -- # local ip
00:19:37.492 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:37.492 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:37.493 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:37.493 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:37.493 13:48:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:37.493 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:37.493 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:37.493 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:37.493 13:48:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:37.493 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:37.493 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.493 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.750 nvme0n1
00:19:37.750 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.750 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:37.750 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.750 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.750 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:37.750 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.750 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:37.750 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:37.750 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.750 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.750 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.750 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:37.750 13:48:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:19:37.750 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:37.750 13:48:40 -- host/auth.sh@44 -- # digest=sha512
00:19:37.750 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:37.750 13:48:40 -- host/auth.sh@44 -- # keyid=2
00:19:37.750 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:37.750 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:37.750 13:48:40 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:37.750 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU:
00:19:37.750 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2
00:19:37.750 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:37.750 13:48:40 -- host/auth.sh@68 -- # digest=sha512
00:19:37.750 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:37.750 13:48:40 -- host/auth.sh@68 -- # keyid=2
00:19:37.750 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:37.750 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.750 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:37.750 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:37.750 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:37.750 13:48:40 -- nvmf/common.sh@717 -- # local ip
00:19:37.750 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:37.750 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:37.750 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:37.750 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:37.750 13:48:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:37.750 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:37.750 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:37.750 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:37.750 13:48:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:37.750 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:37.750 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:37.750 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:38.008 nvme0n1
00:19:38.008 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.008 13:48:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:38.008 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.008 13:48:40 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:38.008 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:38.008 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.008 13:48:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.008 13:48:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:38.008 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.008 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:38.008 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.008 13:48:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:38.008 13:48:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:19:38.008 13:48:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:38.008 13:48:40 -- host/auth.sh@44 -- # digest=sha512
00:19:38.008 13:48:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:38.008 13:48:40 -- host/auth.sh@44 -- # keyid=3
00:19:38.008 13:48:40 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:38.008 13:48:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:38.008 13:48:40 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:38.008 13:48:40 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==:
00:19:38.008 13:48:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3
00:19:38.008 13:48:40 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:38.008 13:48:40 -- host/auth.sh@68 -- # digest=sha512
00:19:38.008 13:48:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:38.008 13:48:40 -- host/auth.sh@68 -- # keyid=3
00:19:38.008 13:48:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:38.008 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.008 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:38.008 13:48:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.008 13:48:40 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:38.008 13:48:40 -- nvmf/common.sh@717 -- # local ip
00:19:38.008 13:48:40 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:38.008 13:48:40 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:38.008 13:48:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:38.008 13:48:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:38.008 13:48:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:38.008 13:48:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:38.008 13:48:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:38.008 13:48:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:38.008 13:48:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:38.008 13:48:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:38.008 13:48:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.008 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:19:38.266 nvme0n1
00:19:38.266 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.266 13:48:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:38.266 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.266 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.266 13:48:41 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:38.266 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.266 13:48:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.266 13:48:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:38.266 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.266 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.525 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.525 13:48:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:38.525 13:48:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:19:38.525 13:48:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:38.525 13:48:41 -- host/auth.sh@44 -- # digest=sha512
00:19:38.525 13:48:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:38.525 13:48:41 -- host/auth.sh@44 -- # keyid=4
00:19:38.525 13:48:41 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:38.525 13:48:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:38.525 13:48:41 -- host/auth.sh@48 -- # echo ffdhe4096
00:19:38.525 13:48:41 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=:
00:19:38.525 13:48:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4
00:19:38.525 13:48:41 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:38.525 13:48:41 -- host/auth.sh@68 -- # digest=sha512
00:19:38.525 13:48:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:19:38.525 13:48:41 -- host/auth.sh@68 -- # keyid=4
00:19:38.525 13:48:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:38.525 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.525 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.525 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.525 13:48:41 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:38.525 13:48:41 -- nvmf/common.sh@717 -- # local ip
00:19:38.525 13:48:41 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:38.525 13:48:41 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:38.525 13:48:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:38.525 13:48:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:38.525 13:48:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:38.525 13:48:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:38.525 13:48:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:38.525 13:48:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:38.525 13:48:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:38.525 13:48:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:38.525 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.525 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.784 nvme0n1
00:19:38.784 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.784 13:48:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:38.784 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.784 13:48:41 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:38.784 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.784 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.784 13:48:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.784 13:48:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:38.784 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.784 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.784 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.784 13:48:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:38.784 13:48:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:38.784 13:48:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:19:38.784 13:48:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:38.784 13:48:41 -- host/auth.sh@44 -- # digest=sha512
00:19:38.784 13:48:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:19:38.784 13:48:41 -- host/auth.sh@44 -- # keyid=0
00:19:38.784 13:48:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:38.784 13:48:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:19:38.784 13:48:41 -- host/auth.sh@48 -- # echo ffdhe6144
00:19:38.784 13:48:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27:
00:19:38.784 13:48:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0
00:19:38.784 13:48:41 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:38.784 13:48:41 -- host/auth.sh@68 -- # digest=sha512
00:19:38.784 13:48:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:19:38.784 13:48:41 -- host/auth.sh@68 -- # keyid=0
00:19:38.784 13:48:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:38.784 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.784 13:48:41 -- common/autotest_common.sh@10 -- # set +x
00:19:38.784 13:48:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:38.784 13:48:41 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:38.785 13:48:41 -- nvmf/common.sh@717 -- # local ip
00:19:38.785 13:48:41 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:38.785 13:48:41 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:38.785 13:48:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:38.785 13:48:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:38.785 13:48:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:38.785 13:48:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:38.785 13:48:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:38.785 13:48:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:38.785 13:48:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:38.785 13:48:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:38.785 13:48:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:38.785 13:48:41 -- common/autotest_common.sh@10
-- # set +x 00:19:39.351 nvme0n1 00:19:39.351 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.351 13:48:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.351 13:48:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:39.351 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.351 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.351 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.351 13:48:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.351 13:48:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.351 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.351 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.351 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.351 13:48:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:39.351 13:48:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:39.351 13:48:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:39.351 13:48:42 -- host/auth.sh@44 -- # digest=sha512 00:19:39.351 13:48:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.352 13:48:42 -- host/auth.sh@44 -- # keyid=1 00:19:39.352 13:48:42 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:39.352 13:48:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:39.352 13:48:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:39.352 13:48:42 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:39.352 13:48:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:39.352 13:48:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:39.352 13:48:42 -- host/auth.sh@68 -- # digest=sha512 00:19:39.352 13:48:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:39.352 13:48:42 -- host/auth.sh@68 -- # keyid=1 00:19:39.352 
13:48:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.352 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.352 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.352 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.352 13:48:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:39.352 13:48:42 -- nvmf/common.sh@717 -- # local ip 00:19:39.352 13:48:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:39.352 13:48:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:39.352 13:48:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.352 13:48:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.352 13:48:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:39.352 13:48:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.352 13:48:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:39.352 13:48:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:39.352 13:48:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:39.352 13:48:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:39.352 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.352 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.287 nvme0n1 00:19:40.287 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.287 13:48:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.287 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.287 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.287 13:48:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.287 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.287 13:48:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.287 13:48:42 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.287 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.287 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.287 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.287 13:48:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.287 13:48:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:40.287 13:48:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.287 13:48:42 -- host/auth.sh@44 -- # digest=sha512 00:19:40.287 13:48:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.287 13:48:42 -- host/auth.sh@44 -- # keyid=2 00:19:40.287 13:48:42 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:40.287 13:48:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:40.287 13:48:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.287 13:48:42 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:40.287 13:48:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:40.287 13:48:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.287 13:48:42 -- host/auth.sh@68 -- # digest=sha512 00:19:40.287 13:48:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.287 13:48:42 -- host/auth.sh@68 -- # keyid=2 00:19:40.287 13:48:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.287 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.287 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.287 13:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.287 13:48:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.287 13:48:42 -- nvmf/common.sh@717 -- # local ip 00:19:40.287 13:48:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.287 13:48:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.287 13:48:42 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.287 13:48:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.287 13:48:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.287 13:48:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.287 13:48:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.287 13:48:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.287 13:48:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.287 13:48:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:40.287 13:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.287 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:19:40.854 nvme0n1 00:19:40.854 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.854 13:48:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.854 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.854 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.854 13:48:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:40.854 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.854 13:48:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.854 13:48:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.854 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.854 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.854 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.854 13:48:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:40.854 13:48:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:40.854 13:48:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:40.854 13:48:43 -- host/auth.sh@44 -- # digest=sha512 00:19:40.854 13:48:43 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.854 13:48:43 -- host/auth.sh@44 -- # keyid=3 00:19:40.854 13:48:43 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:40.854 13:48:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:40.854 13:48:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:40.854 13:48:43 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:40.854 13:48:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:40.854 13:48:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:40.854 13:48:43 -- host/auth.sh@68 -- # digest=sha512 00:19:40.854 13:48:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:40.854 13:48:43 -- host/auth.sh@68 -- # keyid=3 00:19:40.854 13:48:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.854 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.854 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.854 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.854 13:48:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:40.854 13:48:43 -- nvmf/common.sh@717 -- # local ip 00:19:40.854 13:48:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.854 13:48:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.854 13:48:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.854 13:48:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.854 13:48:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.854 13:48:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.854 13:48:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.854 13:48:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.854 13:48:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.854 13:48:43 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:40.854 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.854 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.455 nvme0n1 00:19:41.455 13:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.455 13:48:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.455 13:48:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:41.455 13:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.455 13:48:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.455 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.455 13:48:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.455 13:48:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.455 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.455 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.455 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.455 13:48:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:41.455 13:48:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:41.455 13:48:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:41.455 13:48:44 -- host/auth.sh@44 -- # digest=sha512 00:19:41.455 13:48:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.455 13:48:44 -- host/auth.sh@44 -- # keyid=4 00:19:41.455 13:48:44 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:41.455 13:48:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:41.455 13:48:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:41.455 13:48:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:41.455 13:48:44 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:41.455 13:48:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:41.455 13:48:44 -- host/auth.sh@68 -- # digest=sha512 00:19:41.455 13:48:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:41.455 13:48:44 -- host/auth.sh@68 -- # keyid=4 00:19:41.455 13:48:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.455 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.455 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.455 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.455 13:48:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:41.455 13:48:44 -- nvmf/common.sh@717 -- # local ip 00:19:41.455 13:48:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.455 13:48:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.455 13:48:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.455 13:48:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.455 13:48:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:41.455 13:48:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.455 13:48:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:41.455 13:48:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:41.455 13:48:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:41.455 13:48:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.455 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.455 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.024 nvme0n1 00:19:42.024 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.024 13:48:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.024 13:48:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.024 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.024 13:48:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.024 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.024 13:48:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.024 13:48:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.024 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.024 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.024 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.024 13:48:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.024 13:48:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.024 13:48:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:42.024 13:48:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.024 13:48:44 -- host/auth.sh@44 -- # digest=sha512 00:19:42.024 13:48:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.024 13:48:44 -- host/auth.sh@44 -- # keyid=0 00:19:42.024 13:48:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:42.024 13:48:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:42.024 13:48:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:42.024 13:48:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGUxNWFlNGIxMTE2M2RjY2NhMDk0NmMxOTRlMmZkZDLaiP27: 00:19:42.024 13:48:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:42.024 13:48:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.024 13:48:44 -- host/auth.sh@68 -- # digest=sha512 00:19:42.024 13:48:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:42.024 13:48:44 -- host/auth.sh@68 -- # keyid=0 00:19:42.024 13:48:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.024 13:48:44 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:42.024 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.024 13:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.024 13:48:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.024 13:48:44 -- nvmf/common.sh@717 -- # local ip 00:19:42.024 13:48:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.024 13:48:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.024 13:48:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.024 13:48:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.024 13:48:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:42.024 13:48:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.024 13:48:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:42.024 13:48:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:42.024 13:48:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:42.024 13:48:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:42.024 13:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.024 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:19:42.960 nvme0n1 00:19:42.960 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.960 13:48:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.960 13:48:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:42.960 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.960 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:42.960 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.960 13:48:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.960 13:48:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.960 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.960 13:48:45 -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.960 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.960 13:48:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:42.960 13:48:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:42.960 13:48:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:42.960 13:48:45 -- host/auth.sh@44 -- # digest=sha512 00:19:42.960 13:48:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.960 13:48:45 -- host/auth.sh@44 -- # keyid=1 00:19:42.960 13:48:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:42.960 13:48:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:42.960 13:48:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:42.960 13:48:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:42.960 13:48:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:42.960 13:48:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:42.960 13:48:45 -- host/auth.sh@68 -- # digest=sha512 00:19:42.960 13:48:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:42.960 13:48:45 -- host/auth.sh@68 -- # keyid=1 00:19:42.960 13:48:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.960 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.960 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:42.960 13:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.960 13:48:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:42.960 13:48:45 -- nvmf/common.sh@717 -- # local ip 00:19:42.960 13:48:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:42.960 13:48:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:42.960 13:48:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.960 13:48:45 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.960 13:48:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:42.960 13:48:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.960 13:48:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:42.960 13:48:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:42.960 13:48:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:42.960 13:48:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:42.960 13:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.960 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:19:44.335 nvme0n1 00:19:44.335 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.335 13:48:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.335 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.335 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.335 13:48:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:44.335 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.335 13:48:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.335 13:48:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.335 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.335 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.335 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.335 13:48:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:44.335 13:48:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:44.335 13:48:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:44.335 13:48:46 -- host/auth.sh@44 -- # digest=sha512 00:19:44.335 13:48:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:44.335 13:48:46 -- host/auth.sh@44 -- # keyid=2 00:19:44.335 13:48:46 -- 
host/auth.sh@45 -- # key=DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:44.335 13:48:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:44.335 13:48:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:44.335 13:48:46 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2FkZWZmYjg2MGIyNjUwOGI3ZmVmNjcyN2JiYzNlYjfJNYCU: 00:19:44.335 13:48:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:44.335 13:48:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:44.335 13:48:46 -- host/auth.sh@68 -- # digest=sha512 00:19:44.335 13:48:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:44.335 13:48:46 -- host/auth.sh@68 -- # keyid=2 00:19:44.335 13:48:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.335 13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.335 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:44.335 13:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.335 13:48:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:44.335 13:48:46 -- nvmf/common.sh@717 -- # local ip 00:19:44.335 13:48:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:44.335 13:48:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:44.335 13:48:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.335 13:48:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.335 13:48:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:44.335 13:48:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.335 13:48:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:44.335 13:48:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:44.335 13:48:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:44.335 13:48:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.335 
13:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.335 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:19:45.272 nvme0n1 00:19:45.272 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.272 13:48:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.272 13:48:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:45.272 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.272 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.272 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.272 13:48:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.272 13:48:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.272 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.272 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.273 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.273 13:48:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:45.273 13:48:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:45.273 13:48:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:45.273 13:48:47 -- host/auth.sh@44 -- # digest=sha512 00:19:45.273 13:48:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.273 13:48:47 -- host/auth.sh@44 -- # keyid=3 00:19:45.273 13:48:47 -- host/auth.sh@45 -- # key=DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:45.273 13:48:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:45.273 13:48:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:45.273 13:48:47 -- host/auth.sh@49 -- # echo DHHC-1:02:YzhlYTU4YTE3MzJmMTZmN2RjMjBkOTRiOTQzMDQzZDBlOGMxMzcyOWI1YWIyYjlhg/qX/Q==: 00:19:45.273 13:48:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:45.273 13:48:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:45.273 13:48:47 -- host/auth.sh@68 -- # digest=sha512 00:19:45.273 
13:48:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:45.273 13:48:47 -- host/auth.sh@68 -- # keyid=3 00:19:45.273 13:48:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.273 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.273 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.273 13:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.273 13:48:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:45.273 13:48:47 -- nvmf/common.sh@717 -- # local ip 00:19:45.273 13:48:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:45.273 13:48:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:45.273 13:48:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.273 13:48:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.273 13:48:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:45.273 13:48:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.273 13:48:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:45.273 13:48:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:45.273 13:48:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:45.273 13:48:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:45.273 13:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.273 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:19:46.209 nvme0n1 00:19:46.209 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.209 13:48:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.209 13:48:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:46.209 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.209 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.209 13:48:48 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:46.209 13:48:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.209 13:48:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.209 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.209 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.209 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.209 13:48:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:46.209 13:48:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:46.209 13:48:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:46.209 13:48:48 -- host/auth.sh@44 -- # digest=sha512 00:19:46.209 13:48:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.209 13:48:48 -- host/auth.sh@44 -- # keyid=4 00:19:46.209 13:48:48 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:46.209 13:48:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:46.209 13:48:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:46.209 13:48:48 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDMyNmE0NDg5NDhkZDAyMGM1NWFlMWFkYTI5MTRjMzk1YzNjYmQ2MzEwNWIzNDk1MjNlMDExY2RiYWExN2U1YysHgYY=: 00:19:46.209 13:48:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:46.209 13:48:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:46.209 13:48:48 -- host/auth.sh@68 -- # digest=sha512 00:19:46.209 13:48:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:46.209 13:48:48 -- host/auth.sh@68 -- # keyid=4 00:19:46.209 13:48:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.209 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.209 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.209 13:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.209 13:48:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:46.209 
13:48:48 -- nvmf/common.sh@717 -- # local ip 00:19:46.209 13:48:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:46.209 13:48:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:46.209 13:48:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.209 13:48:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.209 13:48:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:46.209 13:48:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.209 13:48:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:46.209 13:48:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:46.209 13:48:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:46.209 13:48:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.209 13:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.209 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 nvme0n1 00:19:47.587 13:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.587 13:48:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:47.587 13:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:47.587 13:48:50 -- host/auth.sh@42 -- 
# local digest dhgroup keyid key 00:19:47.587 13:48:50 -- host/auth.sh@44 -- # digest=sha256 00:19:47.587 13:48:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.587 13:48:50 -- host/auth.sh@44 -- # keyid=1 00:19:47.587 13:48:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:47.587 13:48:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:47.587 13:48:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:47.587 13:48:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NGRlYjBkNDk4ODdhMDFlMTY4MjJjMzA2NzVlZTUzMGEzZGUyMzM0YTZkMGUyYjI1ARF07Q==: 00:19:47.587 13:48:50 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:47.587 13:48:50 -- nvmf/common.sh@717 -- # local ip 00:19:47.587 13:48:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:47.587 13:48:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:47.587 13:48:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:47.587 13:48:50 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.587 13:48:50 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.587 
13:48:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.587 13:48:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.587 13:48:50 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 request: 00:19:47.587 { 00:19:47.587 "name": "nvme0", 00:19:47.587 "trtype": "tcp", 00:19:47.587 "traddr": "10.0.0.1", 00:19:47.587 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:47.587 "adrfam": "ipv4", 00:19:47.587 "trsvcid": "4420", 00:19:47.587 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:47.587 "method": "bdev_nvme_attach_controller", 00:19:47.587 "req_id": 1 00:19:47.587 } 00:19:47.587 Got JSON-RPC error response 00:19:47.587 response: 00:19:47.587 { 00:19:47.587 "code": -32602, 00:19:47.587 "message": "Invalid parameters" 00:19:47.587 } 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.587 13:48:50 -- common/autotest_common.sh@641 -- # es=1 00:19:47.587 13:48:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.587 13:48:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.587 13:48:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.587 13:48:50 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.587 13:48:50 -- host/auth.sh@121 -- # jq length 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:47.587 13:48:50 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:47.587 13:48:50 -- nvmf/common.sh@717 -- # local ip 00:19:47.587 13:48:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:47.587 13:48:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:47.587 13:48:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:47.587 13:48:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:47.587 13:48:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:47.587 13:48:50 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.587 13:48:50 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.587 13:48:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.587 13:48:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.587 13:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.587 13:48:50 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 request: 00:19:47.587 { 00:19:47.587 "name": "nvme0", 00:19:47.587 "trtype": "tcp", 00:19:47.587 "traddr": "10.0.0.1", 00:19:47.587 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:47.587 "adrfam": "ipv4", 00:19:47.587 "trsvcid": "4420", 00:19:47.587 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:47.587 "dhchap_key": "key2", 00:19:47.587 "method": "bdev_nvme_attach_controller", 00:19:47.587 "req_id": 1 00:19:47.587 } 00:19:47.587 Got JSON-RPC error response 00:19:47.587 response: 00:19:47.587 { 00:19:47.587 "code": -32602, 00:19:47.587 "message": "Invalid parameters" 00:19:47.587 } 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.587 13:48:50 -- common/autotest_common.sh@641 -- # es=1 00:19:47.587 13:48:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.587 13:48:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.587 13:48:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.587 13:48:50 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.587 13:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.587 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:19:47.587 13:48:50 -- host/auth.sh@127 -- # jq length 00:19:47.587 13:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.587 13:48:50 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:47.587 13:48:50 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:47.587 13:48:50 -- host/auth.sh@130 -- # cleanup 00:19:47.587 13:48:50 -- host/auth.sh@24 -- # nvmftestfini 00:19:47.587 13:48:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:47.587 13:48:50 -- nvmf/common.sh@117 -- # sync 00:19:47.587 13:48:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.587 13:48:50 -- nvmf/common.sh@120 
-- # set +e 00:19:47.587 13:48:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.588 13:48:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.588 rmmod nvme_tcp 00:19:47.588 rmmod nvme_fabrics 00:19:47.588 13:48:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.588 13:48:50 -- nvmf/common.sh@124 -- # set -e 00:19:47.588 13:48:50 -- nvmf/common.sh@125 -- # return 0 00:19:47.588 13:48:50 -- nvmf/common.sh@478 -- # '[' -n 2663638 ']' 00:19:47.588 13:48:50 -- nvmf/common.sh@479 -- # killprocess 2663638 00:19:47.588 13:48:50 -- common/autotest_common.sh@936 -- # '[' -z 2663638 ']' 00:19:47.588 13:48:50 -- common/autotest_common.sh@940 -- # kill -0 2663638 00:19:47.588 13:48:50 -- common/autotest_common.sh@941 -- # uname 00:19:47.588 13:48:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.588 13:48:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2663638 00:19:47.588 13:48:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.588 13:48:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.588 13:48:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2663638' 00:19:47.588 killing process with pid 2663638 00:19:47.588 13:48:50 -- common/autotest_common.sh@955 -- # kill 2663638 00:19:47.588 13:48:50 -- common/autotest_common.sh@960 -- # wait 2663638 00:19:47.847 13:48:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.847 13:48:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.847 13:48:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.847 13:48:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.847 13:48:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.847 13:48:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.847 13:48:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.847 13:48:50 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:50.379 13:48:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.379 13:48:52 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:50.379 13:48:52 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:50.379 13:48:52 -- host/auth.sh@27 -- # clean_kernel_target 00:19:50.379 13:48:52 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:50.379 13:48:52 -- nvmf/common.sh@675 -- # echo 0 00:19:50.379 13:48:52 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:50.379 13:48:52 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:50.379 13:48:52 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:50.379 13:48:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:50.379 13:48:52 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:50.379 13:48:52 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:50.379 13:48:52 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:51.313 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:51.313 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:51.313 
0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:51.313 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:52.249 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:19:52.249 13:48:54 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tjU /tmp/spdk.key-null.kP2 /tmp/spdk.key-sha256.o52 /tmp/spdk.key-sha384.EmS /tmp/spdk.key-sha512.Fkd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:19:52.249 13:48:54 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:53.625 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:53.625 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:53.625 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:53.625 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:53.625 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:53.625 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:53.625 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:53.625 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:53.625 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:53.625 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:53.625 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:53.625 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:53.625 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:53.625 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:53.625 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:53.625 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:53.625 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:53.625 00:19:53.625 real 0m49.784s 00:19:53.625 user 
0m47.637s 00:19:53.625 sys 0m5.529s 00:19:53.625 13:48:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:53.625 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.625 ************************************ 00:19:53.625 END TEST nvmf_auth 00:19:53.625 ************************************ 00:19:53.625 13:48:56 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:19:53.625 13:48:56 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:53.625 13:48:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:53.625 13:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.625 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.625 ************************************ 00:19:53.625 START TEST nvmf_digest 00:19:53.625 ************************************ 00:19:53.625 13:48:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:53.625 * Looking for test storage... 
00:19:53.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:53.625 13:48:56 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.625 13:48:56 -- nvmf/common.sh@7 -- # uname -s 00:19:53.625 13:48:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.625 13:48:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.625 13:48:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.625 13:48:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.625 13:48:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.625 13:48:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.625 13:48:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.625 13:48:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.625 13:48:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.625 13:48:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.625 13:48:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:53.625 13:48:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:53.625 13:48:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.625 13:48:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.625 13:48:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.625 13:48:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.625 13:48:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.625 13:48:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.625 13:48:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.625 13:48:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.625 13:48:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.625 13:48:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.626 13:48:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.626 13:48:56 -- paths/export.sh@5 -- # export PATH 00:19:53.626 13:48:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.626 13:48:56 -- nvmf/common.sh@47 -- # : 0 00:19:53.626 13:48:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.626 13:48:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.626 13:48:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.626 13:48:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.626 13:48:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.626 13:48:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.626 13:48:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.626 13:48:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.626 13:48:56 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:53.626 13:48:56 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:53.626 13:48:56 -- host/digest.sh@16 -- # runtime=2 00:19:53.626 13:48:56 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:53.626 13:48:56 -- host/digest.sh@138 -- # nvmftestinit 00:19:53.626 13:48:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:53.626 13:48:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.626 13:48:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:53.626 13:48:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:53.626 13:48:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:53.626 13:48:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.626 13:48:56 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:53.626 13:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.626 13:48:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:53.626 13:48:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:53.626 13:48:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.626 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:19:55.531 13:48:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.531 13:48:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.531 13:48:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.531 13:48:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.531 13:48:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.531 13:48:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.531 13:48:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.531 13:48:58 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.531 13:48:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.531 13:48:58 -- nvmf/common.sh@296 -- # e810=() 00:19:55.531 13:48:58 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.531 13:48:58 -- nvmf/common.sh@297 -- # x722=() 00:19:55.531 13:48:58 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.531 13:48:58 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.531 13:48:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.531 13:48:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.531 13:48:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.531 13:48:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.531 13:48:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.531 13:48:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.531 13:48:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:55.531 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:55.531 13:48:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.531 13:48:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:55.531 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:55.531 13:48:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:55.531 13:48:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.531 13:48:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.531 13:48:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.531 13:48:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.531 13:48:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.531 13:48:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:55.531 Found net devices under 0000:84:00.0: cvl_0_0 00:19:55.531 13:48:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.531 13:48:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.531 13:48:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.531 13:48:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.531 13:48:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.531 13:48:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:55.531 Found net devices under 0000:84:00.1: cvl_0_1 00:19:55.531 13:48:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.531 13:48:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:55.531 13:48:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:55.531 13:48:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:55.531 13:48:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:55.531 13:48:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.531 13:48:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.531 13:48:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.531 13:48:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.532 13:48:58 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.532 13:48:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.532 13:48:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.532 13:48:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.532 13:48:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.532 13:48:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.532 13:48:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.532 13:48:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.532 13:48:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.532 13:48:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.532 13:48:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.532 13:48:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.532 13:48:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.792 13:48:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.792 13:48:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.792 13:48:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:19:55.792 00:19:55.792 --- 10.0.0.2 ping statistics --- 00:19:55.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.792 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:55.792 13:48:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:19:55.792 00:19:55.792 --- 10.0.0.1 ping statistics --- 00:19:55.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.792 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:55.792 13:48:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.792 13:48:58 -- nvmf/common.sh@411 -- # return 0 00:19:55.792 13:48:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:55.792 13:48:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.792 13:48:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:55.792 13:48:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:55.792 13:48:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.792 13:48:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:55.792 13:48:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:55.792 13:48:58 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:55.792 13:48:58 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:55.792 13:48:58 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:55.793 13:48:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:55.793 13:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.793 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.793 ************************************ 00:19:55.793 START TEST nvmf_digest_clean 00:19:55.793 ************************************ 00:19:55.793 13:48:58 -- common/autotest_common.sh@1111 -- # run_digest 00:19:55.793 13:48:58 -- host/digest.sh@120 -- # local dsa_initiator 00:19:55.793 13:48:58 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:55.793 13:48:58 -- host/digest.sh@121 -- # dsa_initiator=false 00:19:55.793 13:48:58 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:55.793 13:48:58 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:55.793 13:48:58 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:55.793 13:48:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:55.793 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.793 13:48:58 -- nvmf/common.sh@470 -- # nvmfpid=2673253 00:19:55.793 13:48:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:55.793 13:48:58 -- nvmf/common.sh@471 -- # waitforlisten 2673253 00:19:55.793 13:48:58 -- common/autotest_common.sh@817 -- # '[' -z 2673253 ']' 00:19:55.793 13:48:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.793 13:48:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.793 13:48:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.793 13:48:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.793 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.793 [2024-04-18 13:48:58.499629] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:19:55.793 [2024-04-18 13:48:58.499711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.793 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.793 [2024-04-18 13:48:58.569675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.053 [2024-04-18 13:48:58.693652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.053 [2024-04-18 13:48:58.693727] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:56.053 [2024-04-18 13:48:58.693743] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.053 [2024-04-18 13:48:58.693757] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.053 [2024-04-18 13:48:58.693769] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.053 [2024-04-18 13:48:58.693803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.053 13:48:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.053 13:48:58 -- common/autotest_common.sh@850 -- # return 0 00:19:56.053 13:48:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:56.053 13:48:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.053 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.053 13:48:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.053 13:48:58 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:56.053 13:48:58 -- host/digest.sh@126 -- # common_target_config 00:19:56.053 13:48:58 -- host/digest.sh@43 -- # rpc_cmd 00:19:56.053 13:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.053 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.312 null0 00:19:56.312 [2024-04-18 13:48:58.897069] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.312 [2024-04-18 13:48:58.921313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.312 13:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.312 13:48:58 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:56.312 13:48:58 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:56.312 13:48:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:56.312 13:48:58 -- host/digest.sh@80 -- # rw=randread 00:19:56.312 
13:48:58 -- host/digest.sh@80 -- # bs=4096 00:19:56.312 13:48:58 -- host/digest.sh@80 -- # qd=128 00:19:56.312 13:48:58 -- host/digest.sh@80 -- # scan_dsa=false 00:19:56.312 13:48:58 -- host/digest.sh@83 -- # bperfpid=2673278 00:19:56.312 13:48:58 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:56.312 13:48:58 -- host/digest.sh@84 -- # waitforlisten 2673278 /var/tmp/bperf.sock 00:19:56.312 13:48:58 -- common/autotest_common.sh@817 -- # '[' -z 2673278 ']' 00:19:56.312 13:48:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:56.312 13:48:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:56.312 13:48:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:56.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:56.312 13:48:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:56.312 13:48:58 -- common/autotest_common.sh@10 -- # set +x 00:19:56.312 [2024-04-18 13:48:58.970769] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:19:56.312 [2024-04-18 13:48:58.970857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673278 ] 00:19:56.312 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.312 [2024-04-18 13:48:59.041850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.569 [2024-04-18 13:48:59.160534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.570 13:48:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.570 13:48:59 -- common/autotest_common.sh@850 -- # return 0 00:19:56.570 13:48:59 -- host/digest.sh@86 -- # false 00:19:56.570 13:48:59 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:56.570 13:48:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:56.827 13:48:59 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.827 13:48:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:57.415 nvme0n1 00:19:57.415 13:48:59 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:57.415 13:48:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.415 Running I/O for 2 seconds... 
00:19:59.321 00:19:59.321 Latency(us) 00:19:59.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:59.321 nvme0n1 : 2.00 19991.50 78.09 0.00 0.00 6396.01 3179.71 16019.91 00:19:59.321 =================================================================================================================== 00:19:59.321 Total : 19991.50 78.09 0.00 0.00 6396.01 3179.71 16019.91 00:19:59.321 0 00:19:59.321 13:49:02 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:59.321 13:49:02 -- host/digest.sh@93 -- # get_accel_stats 00:19:59.321 13:49:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:59.321 13:49:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:59.321 13:49:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:59.321 | select(.opcode=="crc32c") 00:19:59.321 | "\(.module_name) \(.executed)"' 00:19:59.578 13:49:02 -- host/digest.sh@94 -- # false 00:19:59.578 13:49:02 -- host/digest.sh@94 -- # exp_module=software 00:19:59.578 13:49:02 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:59.578 13:49:02 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:59.578 13:49:02 -- host/digest.sh@98 -- # killprocess 2673278 00:19:59.578 13:49:02 -- common/autotest_common.sh@936 -- # '[' -z 2673278 ']' 00:19:59.578 13:49:02 -- common/autotest_common.sh@940 -- # kill -0 2673278 00:19:59.578 13:49:02 -- common/autotest_common.sh@941 -- # uname 00:19:59.578 13:49:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.578 13:49:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673278 00:19:59.837 13:49:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:59.837 13:49:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:59.837 13:49:02 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 2673278' 00:19:59.837 killing process with pid 2673278 00:19:59.837 13:49:02 -- common/autotest_common.sh@955 -- # kill 2673278 00:19:59.837 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.837 00:19:59.837 Latency(us) 00:19:59.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.837 =================================================================================================================== 00:19:59.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.837 13:49:02 -- common/autotest_common.sh@960 -- # wait 2673278 00:20:00.097 13:49:02 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:00.097 13:49:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:00.097 13:49:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:00.097 13:49:02 -- host/digest.sh@80 -- # rw=randread 00:20:00.097 13:49:02 -- host/digest.sh@80 -- # bs=131072 00:20:00.097 13:49:02 -- host/digest.sh@80 -- # qd=16 00:20:00.097 13:49:02 -- host/digest.sh@80 -- # scan_dsa=false 00:20:00.097 13:49:02 -- host/digest.sh@83 -- # bperfpid=2673690 00:20:00.097 13:49:02 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:00.097 13:49:02 -- host/digest.sh@84 -- # waitforlisten 2673690 /var/tmp/bperf.sock 00:20:00.097 13:49:02 -- common/autotest_common.sh@817 -- # '[' -z 2673690 ']' 00:20:00.097 13:49:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:00.097 13:49:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.097 13:49:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:00.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:20:00.097 13:49:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.097 13:49:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.097 [2024-04-18 13:49:02.700554] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:00.097 [2024-04-18 13:49:02.700643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673690 ] 00:20:00.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:00.097 Zero copy mechanism will not be used. 00:20:00.097 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.097 [2024-04-18 13:49:02.761092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.097 [2024-04-18 13:49:02.866666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.356 13:49:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.356 13:49:02 -- common/autotest_common.sh@850 -- # return 0 00:20:00.356 13:49:02 -- host/digest.sh@86 -- # false 00:20:00.356 13:49:02 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:00.357 13:49:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:00.614 13:49:03 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:00.614 13:49:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:00.871 nvme0n1 00:20:00.871 13:49:03 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:00.871 13:49:03 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:01.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:01.128 Zero copy mechanism will not be used. 00:20:01.128 Running I/O for 2 seconds... 00:20:03.026 00:20:03.026 Latency(us) 00:20:03.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.026 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:03.026 nvme0n1 : 2.00 3203.23 400.40 0.00 0.00 4991.35 3689.43 7330.32 00:20:03.026 =================================================================================================================== 00:20:03.026 Total : 3203.23 400.40 0.00 0.00 4991.35 3689.43 7330.32 00:20:03.026 0 00:20:03.026 13:49:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:03.026 13:49:05 -- host/digest.sh@93 -- # get_accel_stats 00:20:03.026 13:49:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:03.026 13:49:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:03.026 13:49:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:03.026 | select(.opcode=="crc32c") 00:20:03.026 | "\(.module_name) \(.executed)"' 00:20:03.283 13:49:05 -- host/digest.sh@94 -- # false 00:20:03.283 13:49:05 -- host/digest.sh@94 -- # exp_module=software 00:20:03.283 13:49:05 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:03.283 13:49:05 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:03.283 13:49:05 -- host/digest.sh@98 -- # killprocess 2673690 00:20:03.283 13:49:05 -- common/autotest_common.sh@936 -- # '[' -z 2673690 ']' 00:20:03.283 13:49:05 -- common/autotest_common.sh@940 -- # kill -0 2673690 00:20:03.283 13:49:05 -- common/autotest_common.sh@941 -- # uname 00:20:03.283 13:49:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:03.283 13:49:05 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673690 00:20:03.283 13:49:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:03.283 13:49:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:03.283 13:49:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673690' 00:20:03.283 killing process with pid 2673690 00:20:03.283 13:49:05 -- common/autotest_common.sh@955 -- # kill 2673690 00:20:03.283 Received shutdown signal, test time was about 2.000000 seconds 00:20:03.283 00:20:03.283 Latency(us) 00:20:03.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.283 =================================================================================================================== 00:20:03.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.283 13:49:05 -- common/autotest_common.sh@960 -- # wait 2673690 00:20:03.541 13:49:06 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:03.541 13:49:06 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:03.541 13:49:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:03.541 13:49:06 -- host/digest.sh@80 -- # rw=randwrite 00:20:03.541 13:49:06 -- host/digest.sh@80 -- # bs=4096 00:20:03.541 13:49:06 -- host/digest.sh@80 -- # qd=128 00:20:03.541 13:49:06 -- host/digest.sh@80 -- # scan_dsa=false 00:20:03.541 13:49:06 -- host/digest.sh@83 -- # bperfpid=2674211 00:20:03.541 13:49:06 -- host/digest.sh@84 -- # waitforlisten 2674211 /var/tmp/bperf.sock 00:20:03.541 13:49:06 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:03.541 13:49:06 -- common/autotest_common.sh@817 -- # '[' -z 2674211 ']' 00:20:03.541 13:49:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:03.541 13:49:06 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:20:03.541 13:49:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:03.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:03.541 13:49:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.541 13:49:06 -- common/autotest_common.sh@10 -- # set +x 00:20:03.541 [2024-04-18 13:49:06.298089] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:03.541 [2024-04-18 13:49:06.298197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674211 ] 00:20:03.541 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.798 [2024-04-18 13:49:06.361933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.798 [2024-04-18 13:49:06.473440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.798 13:49:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:03.798 13:49:06 -- common/autotest_common.sh@850 -- # return 0 00:20:03.798 13:49:06 -- host/digest.sh@86 -- # false 00:20:03.798 13:49:06 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:03.798 13:49:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:04.056 13:49:06 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:04.056 13:49:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:04.313 nvme0n1 00:20:04.570 13:49:07 -- host/digest.sh@92 -- # 
bperf_py perform_tests 00:20:04.570 13:49:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:04.570 Running I/O for 2 seconds... 00:20:06.469 00:20:06.469 Latency(us) 00:20:06.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.470 nvme0n1 : 2.01 19386.01 75.73 0.00 0.00 6588.90 5946.79 15437.37 00:20:06.470 =================================================================================================================== 00:20:06.470 Total : 19386.01 75.73 0.00 0.00 6588.90 5946.79 15437.37 00:20:06.470 0 00:20:06.470 13:49:09 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:06.470 13:49:09 -- host/digest.sh@93 -- # get_accel_stats 00:20:06.470 13:49:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:06.470 13:49:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:06.470 | select(.opcode=="crc32c") 00:20:06.470 | "\(.module_name) \(.executed)"' 00:20:06.470 13:49:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:06.728 13:49:09 -- host/digest.sh@94 -- # false 00:20:06.728 13:49:09 -- host/digest.sh@94 -- # exp_module=software 00:20:06.728 13:49:09 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:06.728 13:49:09 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:06.728 13:49:09 -- host/digest.sh@98 -- # killprocess 2674211 00:20:06.728 13:49:09 -- common/autotest_common.sh@936 -- # '[' -z 2674211 ']' 00:20:06.728 13:49:09 -- common/autotest_common.sh@940 -- # kill -0 2674211 00:20:06.728 13:49:09 -- common/autotest_common.sh@941 -- # uname 00:20:06.728 13:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.728 13:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
2674211 00:20:06.985 13:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:06.985 13:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:06.985 13:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674211' 00:20:06.985 killing process with pid 2674211 00:20:06.985 13:49:09 -- common/autotest_common.sh@955 -- # kill 2674211 00:20:06.985 Received shutdown signal, test time was about 2.000000 seconds 00:20:06.985 00:20:06.985 Latency(us) 00:20:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.985 =================================================================================================================== 00:20:06.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.985 13:49:09 -- common/autotest_common.sh@960 -- # wait 2674211 00:20:07.243 13:49:09 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:07.243 13:49:09 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:07.243 13:49:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:07.243 13:49:09 -- host/digest.sh@80 -- # rw=randwrite 00:20:07.243 13:49:09 -- host/digest.sh@80 -- # bs=131072 00:20:07.243 13:49:09 -- host/digest.sh@80 -- # qd=16 00:20:07.243 13:49:09 -- host/digest.sh@80 -- # scan_dsa=false 00:20:07.243 13:49:09 -- host/digest.sh@83 -- # bperfpid=2674621 00:20:07.243 13:49:09 -- host/digest.sh@84 -- # waitforlisten 2674621 /var/tmp/bperf.sock 00:20:07.243 13:49:09 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:07.243 13:49:09 -- common/autotest_common.sh@817 -- # '[' -z 2674621 ']' 00:20:07.243 13:49:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:07.243 13:49:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.243 13:49:09 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:07.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:07.243 13:49:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.243 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:20:07.243 [2024-04-18 13:49:09.863938] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:07.243 [2024-04-18 13:49:09.864027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674621 ] 00:20:07.243 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:07.243 Zero copy mechanism will not be used. 00:20:07.243 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.243 [2024-04-18 13:49:09.925845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.243 [2024-04-18 13:49:10.041279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.501 13:49:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.501 13:49:10 -- common/autotest_common.sh@850 -- # return 0 00:20:07.501 13:49:10 -- host/digest.sh@86 -- # false 00:20:07.501 13:49:10 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:07.501 13:49:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:07.759 13:49:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.759 13:49:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:08.324 nvme0n1 00:20:08.324 13:49:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:08.324 13:49:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:08.324 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:08.324 Zero copy mechanism will not be used. 00:20:08.324 Running I/O for 2 seconds... 00:20:10.224 00:20:10.224 Latency(us) 00:20:10.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.224 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:10.224 nvme0n1 : 2.00 3929.77 491.22 0.00 0.00 4063.11 2900.57 12524.66 00:20:10.224 =================================================================================================================== 00:20:10.224 Total : 3929.77 491.22 0.00 0.00 4063.11 2900.57 12524.66 00:20:10.224 0 00:20:10.481 13:49:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:10.481 13:49:13 -- host/digest.sh@93 -- # get_accel_stats 00:20:10.481 13:49:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:10.481 13:49:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:10.481 | select(.opcode=="crc32c") 00:20:10.481 | "\(.module_name) \(.executed)"' 00:20:10.481 13:49:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:10.481 13:49:13 -- host/digest.sh@94 -- # false 00:20:10.481 13:49:13 -- host/digest.sh@94 -- # exp_module=software 00:20:10.481 13:49:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:10.481 13:49:13 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:10.481 13:49:13 -- host/digest.sh@98 -- # killprocess 2674621 00:20:10.481 13:49:13 -- common/autotest_common.sh@936 -- # '[' -z 2674621 ']' 00:20:10.481 13:49:13 -- common/autotest_common.sh@940 -- # kill -0 
2674621 00:20:10.481 13:49:13 -- common/autotest_common.sh@941 -- # uname 00:20:10.481 13:49:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.481 13:49:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2674621 00:20:10.740 13:49:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:10.740 13:49:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:10.740 13:49:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674621' 00:20:10.740 killing process with pid 2674621 00:20:10.740 13:49:13 -- common/autotest_common.sh@955 -- # kill 2674621 00:20:10.740 Received shutdown signal, test time was about 2.000000 seconds 00:20:10.740 00:20:10.740 Latency(us) 00:20:10.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.740 =================================================================================================================== 00:20:10.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.740 13:49:13 -- common/autotest_common.sh@960 -- # wait 2674621 00:20:10.998 13:49:13 -- host/digest.sh@132 -- # killprocess 2673253 00:20:10.998 13:49:13 -- common/autotest_common.sh@936 -- # '[' -z 2673253 ']' 00:20:10.998 13:49:13 -- common/autotest_common.sh@940 -- # kill -0 2673253 00:20:10.998 13:49:13 -- common/autotest_common.sh@941 -- # uname 00:20:10.998 13:49:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.998 13:49:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673253 00:20:10.998 13:49:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:10.998 13:49:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:10.998 13:49:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673253' 00:20:10.998 killing process with pid 2673253 00:20:10.998 13:49:13 -- common/autotest_common.sh@955 -- # kill 2673253 00:20:10.998 13:49:13 -- common/autotest_common.sh@960 
-- # wait 2673253 00:20:11.256 00:20:11.256 real 0m15.445s 00:20:11.256 user 0m29.900s 00:20:11.256 sys 0m4.823s 00:20:11.256 13:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:11.256 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.256 ************************************ 00:20:11.256 END TEST nvmf_digest_clean 00:20:11.256 ************************************ 00:20:11.256 13:49:13 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:11.256 13:49:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:11.256 13:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:11.256 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.256 ************************************ 00:20:11.256 START TEST nvmf_digest_error 00:20:11.256 ************************************ 00:20:11.256 13:49:14 -- common/autotest_common.sh@1111 -- # run_digest_error 00:20:11.256 13:49:14 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:11.256 13:49:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:11.256 13:49:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:11.256 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.256 13:49:14 -- nvmf/common.sh@470 -- # nvmfpid=2675070 00:20:11.256 13:49:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:11.256 13:49:14 -- nvmf/common.sh@471 -- # waitforlisten 2675070 00:20:11.256 13:49:14 -- common/autotest_common.sh@817 -- # '[' -z 2675070 ']' 00:20:11.256 13:49:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.256 13:49:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:11.256 13:49:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:11.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.256 13:49:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:11.256 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.515 [2024-04-18 13:49:14.084187] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:11.515 [2024-04-18 13:49:14.084290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.515 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.515 [2024-04-18 13:49:14.149783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.515 [2024-04-18 13:49:14.257740] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.515 [2024-04-18 13:49:14.257806] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.515 [2024-04-18 13:49:14.257844] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.515 [2024-04-18 13:49:14.257856] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.515 [2024-04-18 13:49:14.257867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.515 [2024-04-18 13:49:14.257894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.515 13:49:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:11.515 13:49:14 -- common/autotest_common.sh@850 -- # return 0 00:20:11.515 13:49:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:11.515 13:49:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:11.515 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.515 13:49:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.515 13:49:14 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:11.515 13:49:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.515 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.515 [2024-04-18 13:49:14.310521] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:11.515 13:49:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.515 13:49:14 -- host/digest.sh@105 -- # common_target_config 00:20:11.515 13:49:14 -- host/digest.sh@43 -- # rpc_cmd 00:20:11.515 13:49:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.515 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.774 null0 00:20:11.774 [2024-04-18 13:49:14.435621] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.774 [2024-04-18 13:49:14.459862] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.774 13:49:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.774 13:49:14 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:11.774 13:49:14 -- host/digest.sh@54 -- # local rw bs qd 00:20:11.774 13:49:14 -- host/digest.sh@56 -- # rw=randread 00:20:11.774 13:49:14 -- host/digest.sh@56 -- # bs=4096 00:20:11.774 13:49:14 -- host/digest.sh@56 -- # qd=128 00:20:11.774 13:49:14 -- 
host/digest.sh@58 -- # bperfpid=2675213 00:20:11.774 13:49:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:11.774 13:49:14 -- host/digest.sh@60 -- # waitforlisten 2675213 /var/tmp/bperf.sock 00:20:11.774 13:49:14 -- common/autotest_common.sh@817 -- # '[' -z 2675213 ']' 00:20:11.774 13:49:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:11.774 13:49:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:11.774 13:49:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:11.774 13:49:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:11.774 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.774 [2024-04-18 13:49:14.506052] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:20:11.774 [2024-04-18 13:49:14.506128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675213 ] 00:20:11.774 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.774 [2024-04-18 13:49:14.567734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.061 [2024-04-18 13:49:14.682505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.061 13:49:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:12.061 13:49:14 -- common/autotest_common.sh@850 -- # return 0 00:20:12.061 13:49:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:12.061 13:49:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:12.319 13:49:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:12.319 13:49:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.319 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:20:12.319 13:49:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.319 13:49:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.319 13:49:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.577 nvme0n1 00:20:12.577 13:49:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:12.577 13:49:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.577 13:49:15 -- common/autotest_common.sh@10 -- # 
set +x 00:20:12.577 13:49:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.577 13:49:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:12.577 13:49:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:12.836 Running I/O for 2 seconds... 00:20:12.836 [2024-04-18 13:49:15.492589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.492641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.492665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.505204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.505260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.505293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.522080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.522115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.522136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.533695] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.533730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.533757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.547588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.547624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.547643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.561357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.561385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.561417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.577390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.577422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.577462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.589723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.589757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.589776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.603405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.603433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.603466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.617337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.617365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.617396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.836 [2024-04-18 13:49:15.631534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:12.836 [2024-04-18 13:49:15.631568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.836 [2024-04-18 13:49:15.631587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.643462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.643496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.643515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.659372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.659401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.659431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.672521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.672555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.672574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.684791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.684826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 
13:49:15.684845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.699727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.699787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.714164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.714224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.714241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.727439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.727466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.727482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.741825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.741859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:775 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.741878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.753643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.753677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.753696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.768010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.768045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.768064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.783373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.783401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.783431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.795193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.795238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.795254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.809793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.809845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.823459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.823486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.823502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.835805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.835839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.835858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.850029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.850063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.850082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.863345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.863373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.863404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.876790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.876823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.876842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.094 [2024-04-18 13:49:15.890699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.094 [2024-04-18 13:49:15.890733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.094 [2024-04-18 13:49:15.890752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.904760] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.904793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.904811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.918152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.918194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.918214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.931428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.931455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.946301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.946329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.946360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.959955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.959990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.960010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.973621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.973654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.973673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:15.986589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:15.986621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:15.986641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.001898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.001932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:16.001951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.013886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.013919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:16.013937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.031372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.031400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:16.031431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.045247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.045276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:16.045309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.056952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.056986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.353 [2024-04-18 13:49:16.057006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.353 [2024-04-18 13:49:16.072659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.353 [2024-04-18 13:49:16.072693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.072712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.354 [2024-04-18 13:49:16.086609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.354 [2024-04-18 13:49:16.086647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.086666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.354 [2024-04-18 13:49:16.101202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.354 [2024-04-18 13:49:16.101256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.101272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.354 [2024-04-18 13:49:16.118633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.354 [2024-04-18 13:49:16.118668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23180 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.118687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.354 [2024-04-18 13:49:16.133864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.354 [2024-04-18 13:49:16.133897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.133917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.354 [2024-04-18 13:49:16.147057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.354 [2024-04-18 13:49:16.147091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.354 [2024-04-18 13:49:16.147110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.160687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.160721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.160740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.175780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.175814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.175838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.188740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.188773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.188798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.201873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.201908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.201927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.215040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.215073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.215092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.229672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 
13:49:16.229705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.229724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.242811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.242840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.254985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.255024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.255055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.265530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.265559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.265589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.278635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ae7e80) 00:20:13.612 [2024-04-18 13:49:16.278663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.612 [2024-04-18 13:49:16.278693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.612 [2024-04-18 13:49:16.291238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.291273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.291306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.303772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.303807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.303836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.314541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.314569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.314600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.327907] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.327935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.327966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.339922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.339950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.339980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.353028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.353058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.353090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.364279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.364307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.364339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.377779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.377807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.377838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.390077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.390105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.390136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.400235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.400263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.400295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.613 [2024-04-18 13:49:16.414675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.613 [2024-04-18 13:49:16.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.613 [2024-04-18 13:49:16.414719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.428693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.428721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.428753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.439285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.439313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.439344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.453550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.453578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.453608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.466035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.466094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.476025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.476052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.476082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.489722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.489750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.489781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.503155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.871 [2024-04-18 13:49:16.503205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.871 [2024-04-18 13:49:16.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.871 [2024-04-18 13:49:16.512950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.512977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1521 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.513009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.525880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.525909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.525940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.539144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.539194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.539212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.550281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.550310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.550327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.564696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.564723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:23277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.564754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.574930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.574957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.574987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.588289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.588316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.588347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.598781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.598809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.598840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.612143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 
13:49:16.612200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.612220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.624138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.624165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.624204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.634728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.634786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.647062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.647090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.647121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.660170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.660220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.660236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.872 [2024-04-18 13:49:16.671479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:13.872 [2024-04-18 13:49:16.671520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.872 [2024-04-18 13:49:16.671535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.685125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.685195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.696825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.696852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.696882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.707393] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.707425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.707464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.720275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.720303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.720333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.732097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.732124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.732154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.130 [2024-04-18 13:49:16.745085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.130 [2024-04-18 13:49:16.745113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.130 [2024-04-18 13:49:16.745144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.755999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.756026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.756055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.767841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.767868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.767899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.780459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.780502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.780518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.791210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.791238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.791269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.805102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.805130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.805161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.818502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.818550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.818567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.828219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.828246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.828277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.842973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.843000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.843031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.856190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.856220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.856251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.867800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.867828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.867860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.880759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.880786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.880817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.893699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.893727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8737 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.893759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.904941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.904968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.904998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.917775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.917803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.917833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.131 [2024-04-18 13:49:16.927837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.131 [2024-04-18 13:49:16.927863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.131 [2024-04-18 13:49:16.927893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:16.942568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.389 [2024-04-18 13:49:16.942596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.389 [2024-04-18 13:49:16.942627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:16.954884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.389 [2024-04-18 13:49:16.954912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.389 [2024-04-18 13:49:16.954943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:16.968345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.389 [2024-04-18 13:49:16.968374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.389 [2024-04-18 13:49:16.968405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:16.980096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.389 [2024-04-18 13:49:16.980125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.389 [2024-04-18 13:49:16.980157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:16.992751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.389 [2024-04-18 
13:49:16.992784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.389 [2024-04-18 13:49:16.992803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.389 [2024-04-18 13:49:17.007867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.007901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.007920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.021054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.021088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.021107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.035227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.035255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.035292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.048110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.048144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.048164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.062166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.062207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.062239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.075688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.075722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.075740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.087513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.087546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.087565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.103619] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.103654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.103673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.117502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.117535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.117554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.131297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.131325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.131357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.145203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.145245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.145261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.159168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.159218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.159252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.170960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.170993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.171012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.390 [2024-04-18 13:49:17.185524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.390 [2024-04-18 13:49:17.185557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.390 [2024-04-18 13:49:17.185576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.200603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.200638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.200656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.213163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.213206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.213226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.225910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.225944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.225963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.239395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.239422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.239452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.252334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.252366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.252398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.267560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.267594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.267613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.281735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.281768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.281787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.293449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.293491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.648 [2024-04-18 13:49:17.293506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.648 [2024-04-18 13:49:17.310212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.648 [2024-04-18 13:49:17.310254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16421 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.310269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.325598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.325632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.325651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.340763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.340797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.340817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.354563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.354596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.354615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.369265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.369293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:22378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.369324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.380788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.380821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.380840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.397225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.397260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.397291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.409336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 13:49:17.409363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.649 [2024-04-18 13:49:17.409393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.649 [2024-04-18 13:49:17.423145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80) 00:20:14.649 [2024-04-18 
13:49:17.423187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:14.649 [2024-04-18 13:49:17.423207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:14.649 [2024-04-18 13:49:17.436907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80)
00:20:14.649 [2024-04-18 13:49:17.436940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:14.649 [2024-04-18 13:49:17.436959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:14.649 [2024-04-18 13:49:17.450281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80)
00:20:14.649 [2024-04-18 13:49:17.450310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:14.649 [2024-04-18 13:49:17.450342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:14.906 [2024-04-18 13:49:17.465354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ae7e80)
00:20:14.906 [2024-04-18 13:49:17.465382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:14.906 [2024-04-18 13:49:17.465413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:14.906
00:20:14.906 Latency(us)
00:20:14.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:14.906 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:14.906 nvme0n1 : 2.01 19173.83 74.90 0.00 0.00 6668.90 3106.89 21651.15
00:20:14.906 ===================================================================================================================
00:20:14.906 Total : 19173.83 74.90 0.00 0.00 6668.90 3106.89 21651.15
00:20:14.906 0
00:20:14.906 13:49:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:49:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:49:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
13:49:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:14.906 | .driver_specific
00:20:14.906 | .nvme_error
00:20:14.906 | .status_code
00:20:14.906 | .command_transient_transport_error'
00:20:15.165 13:49:17 -- host/digest.sh@71 -- # (( 150 > 0 ))
00:20:15.165 13:49:17 -- host/digest.sh@73 -- # killprocess 2675213
00:20:15.165 13:49:17 -- common/autotest_common.sh@936 -- # '[' -z 2675213 ']'
00:20:15.165 13:49:17 -- common/autotest_common.sh@940 -- # kill -0 2675213
00:20:15.165 13:49:17 -- common/autotest_common.sh@941 -- # uname
00:20:15.165 13:49:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:15.165 13:49:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2675213
00:20:15.165 13:49:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:15.165 13:49:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:15.165 13:49:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2675213'
00:20:15.165 killing process with pid 2675213
00:20:15.165 13:49:17 -- common/autotest_common.sh@955 -- # kill 2675213
00:20:15.165 Received shutdown signal, test time was about 2.000000 seconds
00:20:15.165
00:20:15.165 Latency(us)
00:20:15.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:15.165 ===================================================================================================================
00:20:15.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:15.165 13:49:17 -- common/autotest_common.sh@960 -- # wait 2675213
00:20:15.423 13:49:18 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:15.423 13:49:18 -- host/digest.sh@54 -- # local rw bs qd
00:20:15.423 13:49:18 -- host/digest.sh@56 -- # rw=randread
00:20:15.423 13:49:18 -- host/digest.sh@56 -- # bs=131072
00:20:15.423 13:49:18 -- host/digest.sh@56 -- # qd=16
00:20:15.423 13:49:18 -- host/digest.sh@58 -- # bperfpid=2675622
00:20:15.423 13:49:18 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:15.423 13:49:18 -- host/digest.sh@60 -- # waitforlisten 2675622 /var/tmp/bperf.sock
00:20:15.423 13:49:18 -- common/autotest_common.sh@817 -- # '[' -z 2675622 ']'
00:20:15.423 13:49:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:15.423 13:49:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:15.423 13:49:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:15.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:15.423 13:49:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:15.423 13:49:18 -- common/autotest_common.sh@10 -- # set +x
00:20:15.423 [2024-04-18 13:49:18.088175] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:20:15.423 [2024-04-18 13:49:18.088270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675622 ]
00:20:15.423 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:15.423 Zero copy mechanism will not be used.
00:20:15.423 EAL: No free 2048 kB hugepages reported on node 1
00:20:15.423 [2024-04-18 13:49:18.145745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:15.681 [2024-04-18 13:49:18.251703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:15.681 13:49:18 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:15.681 13:49:18 -- common/autotest_common.sh@850 -- # return 0
00:20:15.681 13:49:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:15.681 13:49:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:15.939 13:49:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:15.939 13:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:15.939 13:49:18 -- common/autotest_common.sh@10 -- # set +x
00:20:15.939 13:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:15.939 13:49:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:15.939 13:49:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:16.504 nvme0n1
00:20:16.504 13:49:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:16.504 13:49:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.504 13:49:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.504 13:49:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.504 13:49:19 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:16.504 13:49:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:16.504 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:16.504 Zero copy mechanism will not be used. 00:20:16.504 Running I/O for 2 seconds... 00:20:16.504 [2024-04-18 13:49:19.189109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.189164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.189197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.198197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.198243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.198259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.207202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.207244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 
[2024-04-18 13:49:19.207259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.216787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.216820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.216840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.228072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.228108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.228127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.241875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.241907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.241926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.256570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.256602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.256621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.271994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.272026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.272053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.287432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.287475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.287495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.505 [2024-04-18 13:49:19.302719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.505 [2024-04-18 13:49:19.302752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.505 [2024-04-18 13:49:19.302771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.318328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.318356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.318371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.333527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.333559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.333577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.348844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.348877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.348895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.364080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.364113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.364131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.378860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.378893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.378912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.393818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.393869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.409202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.409250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.409266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.423963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.423996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.424015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.438957] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.438989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.439008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.451027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.451061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.763 [2024-04-18 13:49:19.451080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.763 [2024-04-18 13:49:19.460451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.763 [2024-04-18 13:49:19.460494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.460514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.470013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.470045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.470063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.479909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.479942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.479959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.489653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.489685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.489704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.500353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.500379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.500409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.512620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.512670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.523645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.523679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.523698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.532882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.532914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.532933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.541677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.541709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.541728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.550433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.550459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 
13:49:19.550491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.764 [2024-04-18 13:49:19.559734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:16.764 [2024-04-18 13:49:19.559767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.764 [2024-04-18 13:49:19.559786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.571833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.571867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.571886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.582698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.582731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.582749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.594940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.594974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.594999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.605782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.605816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.605835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.614930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.614962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.614980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.623719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.623752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.623771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.632366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.632394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.632426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.641448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.641476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.653308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.653335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.663569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.663602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.663620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.672489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.672522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.672541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.681509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.681542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.681560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.690376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.690403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.690433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.022 [2024-04-18 13:49:19.699289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.022 [2024-04-18 13:49:19.699317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.022 [2024-04-18 13:49:19.699347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.708099] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.708132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.708151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.716805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.716837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.716855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.725736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.725769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.725787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.735988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.736021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.736040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.745219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.745246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.754245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.754271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.754307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.763355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.763381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.763411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.772557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.772589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.772607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.781738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.781771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.781789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.790759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.790791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.790809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.799879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.799912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.799931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.809054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 
13:49:19.809105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.818356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.818386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.023 [2024-04-18 13:49:19.827533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.023 [2024-04-18 13:49:19.827577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.023 [2024-04-18 13:49:19.827594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.287 [2024-04-18 13:49:19.837546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.287 [2024-04-18 13:49:19.837588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.287 [2024-04-18 13:49:19.837607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.287 [2024-04-18 13:49:19.846377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.287 [2024-04-18 13:49:19.846408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.287 [2024-04-18 13:49:19.846424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.287 [2024-04-18 13:49:19.855674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.287 [2024-04-18 13:49:19.855708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.855726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.288 [2024-04-18 13:49:19.864697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.288 [2024-04-18 13:49:19.864731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.864749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.288 [2024-04-18 13:49:19.873490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.288 [2024-04-18 13:49:19.873517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.873550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.288 [2024-04-18 13:49:19.882309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.288 [2024-04-18 13:49:19.882337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.882368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.288 [2024-04-18 13:49:19.890991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.288 [2024-04-18 13:49:19.891023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.891041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.288 [2024-04-18 13:49:19.899944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.288 [2024-04-18 13:49:19.899975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.288 [2024-04-18 13:49:19.899994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.908654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.908686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.908704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.917791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.917825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.917844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.927874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.927908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.927928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.937492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.937539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.937558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.946816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.946849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.946868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.955866] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.955899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.955917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.965104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.965138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.973978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.289 [2024-04-18 13:49:19.974011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.289 [2024-04-18 13:49:19.974029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.289 [2024-04-18 13:49:19.982758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.290 [2024-04-18 13:49:19.982791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.290 [2024-04-18 13:49:19.982809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:17.290 [2024-04-18 13:49:19.991349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.290 [2024-04-18 13:49:19.991376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.290 [2024-04-18 13:49:19.991415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.290 [2024-04-18 13:49:20.000503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.290 [2024-04-18 13:49:20.000549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.290 [2024-04-18 13:49:20.000567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.290 [2024-04-18 13:49:20.009641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.290 [2024-04-18 13:49:20.009676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.290 [2024-04-18 13:49:20.009708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.290 [2024-04-18 13:49:20.019290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.019339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.019357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.028204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.028253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.028271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.037450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.037490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.037509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.047006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.047042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.047061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.055936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.055969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 
13:49:20.055988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.064683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.064715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.064734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.074299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.291 [2024-04-18 13:49:20.074351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.291 [2024-04-18 13:49:20.074370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.291 [2024-04-18 13:49:20.083814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.292 [2024-04-18 13:49:20.083847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.292 [2024-04-18 13:49:20.083866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.093575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.093611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.093631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.102408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.102438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.102455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.111168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.111224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.111243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.120199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.120233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.120263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.129786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.129820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.129839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.138765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.138799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.138818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.147753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.147787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.147805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.156830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.156865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.156884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.166031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.166065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.175092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.175126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.175146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.183912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.183946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.183965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.193123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.193157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.193184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.201869] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.201902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.201921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.211642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.211677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.211696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.221103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.221136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.221155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.229901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.229939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.229959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.239246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.239276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.239292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.247999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.248032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.248052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.257259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.257289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.257306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.266333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.266362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.266378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.275676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.275710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.275730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.284848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.284881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.284900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.558 [2024-04-18 13:49:20.294196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.558 [2024-04-18 13:49:20.294240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.558 [2024-04-18 13:49:20.294255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.303312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.303340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 
13:49:20.303357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.312102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.312135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.312154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.321270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.321300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.321317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.330430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.330475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.330491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.339720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.339754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.339773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.348551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.348584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.348603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.559 [2024-04-18 13:49:20.357482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.559 [2024-04-18 13:49:20.357515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.559 [2024-04-18 13:49:20.357534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.366997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.367031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.367050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.375900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.375932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.375951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.384734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.384767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.384793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.393488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.393517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.393547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.402294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.402324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.402340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.411098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.411131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.411150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.419740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.419774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.419793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.429271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.429299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.429316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.438291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.438319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.447604] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.447639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.447659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.457402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.457431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.457446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.817 [2024-04-18 13:49:20.466292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.817 [2024-04-18 13:49:20.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.817 [2024-04-18 13:49:20.466344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.475093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.475146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.483878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.483911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.483930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.493409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.493439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.493457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.502701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.502735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.502755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.511486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.511533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.511552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.520252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.520279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.520295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.529411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.529439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.529455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.538339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.538367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.538383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.547312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.547339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 
13:49:20.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.556625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.556659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.565560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.565593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.565612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.574769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.574803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.574822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.583537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.583570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.583588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.592171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.592226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.592242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.600825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.600857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.600876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.609667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.609699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.609718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.818 [2024-04-18 13:49:20.619435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:17.818 [2024-04-18 13:49:20.619480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.818 [2024-04-18 13:49:20.619502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.076 [2024-04-18 13:49:20.628486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.076 [2024-04-18 13:49:20.628514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.076 [2024-04-18 13:49:20.628546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.076 [2024-04-18 13:49:20.637340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.076 [2024-04-18 13:49:20.637367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.076 [2024-04-18 13:49:20.637384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.076 [2024-04-18 13:49:20.646114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.076 [2024-04-18 13:49:20.646146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.076 [2024-04-18 13:49:20.646165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.076 [2024-04-18 13:49:20.655620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:18.076 [2024-04-18 13:49:20.655655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.076 [2024-04-18 13:49:20.655675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.076 [2024-04-18 13:49:20.664536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.076 [2024-04-18 13:49:20.664570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.664588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.673667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.673702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.673722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.683027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.683061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.683080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.692307] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.692338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.692356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.701488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.701516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.701550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.710198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.710244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.710260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.719229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.719264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.719281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.728144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.728186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.728222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.736893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.736927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.736945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.745391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.745419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.745435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.755680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.755714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.755733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.765098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.765132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.765151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.774371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.774399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.774421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.783417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.783445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.783462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.792043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.792077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 
13:49:20.792096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.801232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.801275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.801290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.811798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.811831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.811850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.822472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.822514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.822530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.833310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.833341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.833359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.843062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.843096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.843115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.852858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.852892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.852911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.864269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.077 [2024-04-18 13:49:20.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.077 [2024-04-18 13:49:20.864339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.077 [2024-04-18 13:49:20.874110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.078 [2024-04-18 13:49:20.874144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.078 [2024-04-18 13:49:20.874163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.883953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.335 [2024-04-18 13:49:20.883987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.335 [2024-04-18 13:49:20.884007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.893877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.335 [2024-04-18 13:49:20.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.335 [2024-04-18 13:49:20.893932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.903655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.335 [2024-04-18 13:49:20.903691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.335 [2024-04-18 13:49:20.903710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.913627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8778e0) 00:20:18.335 [2024-04-18 13:49:20.913661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.335 [2024-04-18 13:49:20.913679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.923372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.335 [2024-04-18 13:49:20.923402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.335 [2024-04-18 13:49:20.923419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.335 [2024-04-18 13:49:20.933087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.933120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.933140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.942845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.942877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.942897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.952624] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.952658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.952677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.962250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.962280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.972283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.972311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.972327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.983025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.983058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.983076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:20.994088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:20.994122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:20.994142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.006343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.006370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.006386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.018454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.018482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.018498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.030817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.030850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.030869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.043429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.043474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.043499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.056685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.056719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.056738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.069816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.069850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.069869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.082809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.082844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 
13:49:21.082864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.096241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.096271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.096286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.109464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.109515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.109530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.123151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.123193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.123228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.336 [2024-04-18 13:49:21.137013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.336 [2024-04-18 13:49:21.137047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.336 [2024-04-18 13:49:21.137066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.593 [2024-04-18 13:49:21.150882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.593 [2024-04-18 13:49:21.150916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.593 [2024-04-18 13:49:21.150936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.593 [2024-04-18 13:49:21.164522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.593 [2024-04-18 13:49:21.164554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.593 [2024-04-18 13:49:21.164588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:18.593 [2024-04-18 13:49:21.179088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8778e0) 00:20:18.593 [2024-04-18 13:49:21.179122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.593 [2024-04-18 13:49:21.179141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.593 00:20:18.593 Latency(us) 00:20:18.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.593 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:18.593 nvme0n1 : 
2.00 3089.00 386.13 0.00 0.00 5175.60 4029.25 15728.64 00:20:18.593 =================================================================================================================== 00:20:18.593 Total : 3089.00 386.13 0.00 0.00 5175.60 4029.25 15728.64 00:20:18.593 0 00:20:18.593 13:49:21 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:18.593 13:49:21 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:18.593 13:49:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:18.593 13:49:21 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:18.593 | .driver_specific 00:20:18.593 | .nvme_error 00:20:18.593 | .status_code 00:20:18.593 | .command_transient_transport_error' 00:20:18.850 13:49:21 -- host/digest.sh@71 -- # (( 199 > 0 )) 00:20:18.850 13:49:21 -- host/digest.sh@73 -- # killprocess 2675622 00:20:18.850 13:49:21 -- common/autotest_common.sh@936 -- # '[' -z 2675622 ']' 00:20:18.850 13:49:21 -- common/autotest_common.sh@940 -- # kill -0 2675622 00:20:18.850 13:49:21 -- common/autotest_common.sh@941 -- # uname 00:20:18.850 13:49:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.850 13:49:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2675622 00:20:18.850 13:49:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:18.850 13:49:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:18.850 13:49:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2675622' 00:20:18.850 killing process with pid 2675622 00:20:18.850 13:49:21 -- common/autotest_common.sh@955 -- # kill 2675622 00:20:18.850 Received shutdown signal, test time was about 2.000000 seconds 00:20:18.850 00:20:18.850 Latency(us) 00:20:18.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.850 
=================================================================================================================== 00:20:18.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.850 13:49:21 -- common/autotest_common.sh@960 -- # wait 2675622 00:20:19.107 13:49:21 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:19.107 13:49:21 -- host/digest.sh@54 -- # local rw bs qd 00:20:19.107 13:49:21 -- host/digest.sh@56 -- # rw=randwrite 00:20:19.107 13:49:21 -- host/digest.sh@56 -- # bs=4096 00:20:19.107 13:49:21 -- host/digest.sh@56 -- # qd=128 00:20:19.107 13:49:21 -- host/digest.sh@58 -- # bperfpid=2676032 00:20:19.107 13:49:21 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:19.107 13:49:21 -- host/digest.sh@60 -- # waitforlisten 2676032 /var/tmp/bperf.sock 00:20:19.107 13:49:21 -- common/autotest_common.sh@817 -- # '[' -z 2676032 ']' 00:20:19.107 13:49:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:19.107 13:49:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:19.107 13:49:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:19.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:19.107 13:49:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:19.107 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:20:19.107 [2024-04-18 13:49:21.795073] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:20:19.108 [2024-04-18 13:49:21.795151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676032 ] 00:20:19.108 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.108 [2024-04-18 13:49:21.854039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.365 [2024-04-18 13:49:21.961080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.365 13:49:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:19.365 13:49:22 -- common/autotest_common.sh@850 -- # return 0 00:20:19.365 13:49:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:19.365 13:49:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:19.623 13:49:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:19.623 13:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.623 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.623 13:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.623 13:49:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:19.623 13:49:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.188 nvme0n1 00:20:20.188 13:49:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:20.188 13:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.188 13:49:22 -- common/autotest_common.sh@10 -- # 
set +x
00:20:20.188 13:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:20.188 13:49:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:20.188 13:49:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:20.188 Running I/O for 2 seconds...
00:20:20.188 [2024-04-18 13:49:22.941656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.188 [2024-04-18 13:49:22.942026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.188 [2024-04-18 13:49:22.942072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.188 [2024-04-18 13:49:22.955969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.188 [2024-04-18 13:49:22.956314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.188 [2024-04-18 13:49:22.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.188 [2024-04-18 13:49:22.970226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.188 [2024-04-18 13:49:22.970483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.188 [2024-04-18 13:49:22.970516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.188 [2024-04-18 13:49:22.984417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.188 [2024-04-18 13:49:22.984700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.188 [2024-04-18 13:49:22.984732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:22.998899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:22.999159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:22.999200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.012923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.013257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.013283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.026914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.027251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.027278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.041064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.041358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.041385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.055125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.055403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.055429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.069090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.069375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.069401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.083099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.083371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.083411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.097265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.097569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.097600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.111607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.111841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.111873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.125744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.125993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.126025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.139947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.140202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.140244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.154299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.154562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.444 [2024-04-18 13:49:23.168431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.444 [2024-04-18 13:49:23.168686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.444 [2024-04-18 13:49:23.168717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.445 [2024-04-18 13:49:23.182623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.445 [2024-04-18 13:49:23.182874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.445 [2024-04-18 13:49:23.182906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.445 [2024-04-18 13:49:23.196763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.445 [2024-04-18 13:49:23.197055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.445 [2024-04-18 13:49:23.197081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.445 [2024-04-18 13:49:23.210782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.445 [2024-04-18 13:49:23.211085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.445 [2024-04-18 13:49:23.211116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.445 [2024-04-18 13:49:23.224945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.445 [2024-04-18 13:49:23.225232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.445 [2024-04-18 13:49:23.225278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.445 [2024-04-18 13:49:23.239051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.445 [2024-04-18 13:49:23.239319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.445 [2024-04-18 13:49:23.239345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.253309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.253610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.253641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.267424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.267695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.267725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.281409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.281685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.281716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.295586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.295913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.295944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.309651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.309991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.310022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.323814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.324161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.324201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.337990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.338266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.338294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.352195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.352457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.352482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.366309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.366565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.366597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.380430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.380722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.380753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.394696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.394970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.395002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.408916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.409259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.409286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.423071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.423363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.423388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.437153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.437446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.437473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.451351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.451638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.451664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.465592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.465920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.465952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.479760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.480093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.480125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.493800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.494081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.494112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.702 [2024-04-18 13:49:23.508113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.702 [2024-04-18 13:49:23.508362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.702 [2024-04-18 13:49:23.508404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.959 [2024-04-18 13:49:23.522550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.959 [2024-04-18 13:49:23.522819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.959 [2024-04-18 13:49:23.522850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.959 [2024-04-18 13:49:23.536817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.959 [2024-04-18 13:49:23.537088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.959 [2024-04-18 13:49:23.537119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.959 [2024-04-18 13:49:23.550915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.959 [2024-04-18 13:49:23.551194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.959 [2024-04-18 13:49:23.551240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.959 [2024-04-18 13:49:23.564239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.959 [2024-04-18 13:49:23.564507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.959 [2024-04-18 13:49:23.564548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.959 [2024-04-18 13:49:23.577051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.959 [2024-04-18 13:49:23.577294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.577321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.589888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.590139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.590193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.602410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.602743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.602770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.614830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.615174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.615208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.627371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.627646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.627672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.639854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.640154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.640202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.652320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.652581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.652608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.664746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.664985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.665011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.677257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.677513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.689829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.690081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.690107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.702302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.702634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.702662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.715255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.715482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.715508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.727837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.728087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.728113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.740311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.740641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.740667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.752740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.752984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.753012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:20.960 [2024-04-18 13:49:23.765587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:20.960 [2024-04-18 13:49:23.765865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:20.960 [2024-04-18 13:49:23.765906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.778354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.778631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.778657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.790938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.791196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.791225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.803376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.803712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.803738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.815793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.816005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.816031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.828264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.828516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.828542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.840723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.840992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.841019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.853261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.853575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.853601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.865694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.865941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.865967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.878050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.878352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.890618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.890865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.890891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.903007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.903265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.903293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.915610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.915857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.915884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.927986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.928234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.928261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.940479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.940749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.940776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.952980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.953263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.953303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.965977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.966237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.966266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.978543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.978792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.978818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:23.991024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:23.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:23.991331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:24.003558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:24.003807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:24.003834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.218 [2024-04-18 13:49:24.015924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.218 [2024-04-18 13:49:24.016192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:21.218 [2024-04-18 13:49:24.016220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:21.476 [2024-04-18 13:49:24.029205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90
00:20:21.476 [2024-04-18 13:49:24.029477] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.029511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.041693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.041947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.041973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.054134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.054428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.054455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.066587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.066833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.066859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.079001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 
[2024-04-18 13:49:24.079248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.079276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.091659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.091907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.091933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.104048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.104385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.104413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.116640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.116947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.116973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.128982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.129238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.129265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.141337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.141589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.141615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.153642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.153873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.153899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.166038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.166280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.166307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.178490] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.178753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.191113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.191352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.203571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.203829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.203856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.216359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.216610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.216636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:20:21.476 [2024-04-18 13:49:24.228708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.228935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.228960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.240939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.476 [2024-04-18 13:49:24.241191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.476 [2024-04-18 13:49:24.241229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.476 [2024-04-18 13:49:24.253456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.477 [2024-04-18 13:49:24.253774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.477 [2024-04-18 13:49:24.253800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.477 [2024-04-18 13:49:24.265862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.477 [2024-04-18 13:49:24.266108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.477 [2024-04-18 13:49:24.266134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.477 [2024-04-18 13:49:24.278425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.477 [2024-04-18 13:49:24.278698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.477 [2024-04-18 13:49:24.278724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.291846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.292112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.292138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.304485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.304719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.304744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.316873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.317118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.317144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.329394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.329641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.329666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.341949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.342198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.342224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.354974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.355238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.736 [2024-04-18 13:49:24.355269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.736 [2024-04-18 13:49:24.369063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.736 [2024-04-18 13:49:24.369313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.369340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.383153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.383384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.383410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.397297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.397575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.397606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.411445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.411770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.411800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.425618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.425920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 
[2024-04-18 13:49:24.425951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.439902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.440202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.440245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.454121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.454432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.454475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.468394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.468724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.468755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.482612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.482879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10827 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.482911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.496767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.497094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.497125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.510857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.511124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.511155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.525380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.525743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.525775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.737 [2024-04-18 13:49:24.539920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.737 [2024-04-18 13:49:24.540156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.737 [2024-04-18 13:49:24.540194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.554342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.554684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.554715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.568478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.568730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.568761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.582564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.582805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.582837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.596650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.596981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.597012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.610723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.611059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.611090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.624836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.625183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.638904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.639194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.639238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.653000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 
[2024-04-18 13:49:24.653279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.653306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.667139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.667485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.667529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.681166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.681519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.681550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.695226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.695548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.695579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.709325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.709594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.709636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.723495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.723786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.723822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.737867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.738221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.738247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.752038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.752391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.752417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.766105] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.766450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.766492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.780206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.780471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.780513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:21.996 [2024-04-18 13:49:24.794278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:21.996 [2024-04-18 13:49:24.794544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:21.996 [2024-04-18 13:49:24.794577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.808505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.808787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.808819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:20:22.255 [2024-04-18 13:49:24.822600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.822882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.822914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.836671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.836955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.850768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.851056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.851087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.864838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.865186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.865230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.878878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.879157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.879196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.893023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.893366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.893392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.907097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.907405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.907432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 [2024-04-18 13:49:24.921229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2229d30) with pdu=0x2000190fef90 00:20:22.255 [2024-04-18 13:49:24.921481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.255 [2024-04-18 13:49:24.921507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:22.255 00:20:22.255 Latency(us) 00:20:22.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.255 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:22.255 nvme0n1 : 2.01 18890.54 73.79 0.00 0.00 6760.38 5995.33 15922.82 00:20:22.255 =================================================================================================================== 00:20:22.255 Total : 18890.54 73.79 0.00 0.00 6760.38 5995.33 15922.82 00:20:22.255 0 00:20:22.255 13:49:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:22.255 13:49:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:22.255 13:49:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:22.255 | .driver_specific 00:20:22.255 | .nvme_error 00:20:22.255 | .status_code 00:20:22.255 | .command_transient_transport_error' 00:20:22.255 13:49:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:22.513 13:49:25 -- host/digest.sh@71 -- # (( 148 > 0 )) 00:20:22.513 13:49:25 -- host/digest.sh@73 -- # killprocess 2676032 00:20:22.513 13:49:25 -- common/autotest_common.sh@936 -- # '[' -z 2676032 ']' 00:20:22.513 13:49:25 -- common/autotest_common.sh@940 -- # kill -0 2676032 00:20:22.513 13:49:25 -- common/autotest_common.sh@941 -- # uname 00:20:22.513 13:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.513 13:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2676032 00:20:22.513 13:49:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:22.513 13:49:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:22.513 13:49:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2676032' 00:20:22.513 killing process with pid 2676032 00:20:22.513 13:49:25 -- 
common/autotest_common.sh@955 -- # kill 2676032 00:20:22.513 Received shutdown signal, test time was about 2.000000 seconds 00:20:22.513 00:20:22.513 Latency(us) 00:20:22.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.513 =================================================================================================================== 00:20:22.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.513 13:49:25 -- common/autotest_common.sh@960 -- # wait 2676032 00:20:22.771 13:49:25 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:22.771 13:49:25 -- host/digest.sh@54 -- # local rw bs qd 00:20:22.771 13:49:25 -- host/digest.sh@56 -- # rw=randwrite 00:20:22.771 13:49:25 -- host/digest.sh@56 -- # bs=131072 00:20:22.771 13:49:25 -- host/digest.sh@56 -- # qd=16 00:20:22.771 13:49:25 -- host/digest.sh@58 -- # bperfpid=2676514 00:20:22.771 13:49:25 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:22.771 13:49:25 -- host/digest.sh@60 -- # waitforlisten 2676514 /var/tmp/bperf.sock 00:20:22.771 13:49:25 -- common/autotest_common.sh@817 -- # '[' -z 2676514 ']' 00:20:22.771 13:49:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.771 13:49:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:22.771 13:49:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.771 13:49:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:22.771 13:49:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.771 [2024-04-18 13:49:25.526313] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
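The run above drives random WRITEs over NVMe/TCP with data digest enabled (`--ddgst`) while the accel layer deliberately corrupts the CRC-32C checksum at an interval; each corrupted PDU shows up as a `data_crc32_calc_done` digest error plus a TRANSIENT TRANSPORT ERROR completion, which the test then counts via `bdev_get_iostat`. As a minimal illustrative sketch (not SPDK code), the digest being corrupted is plain CRC-32C over the PDU data, computable with the standard reflected polynomial:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C (Castagnoli), the checksum NVMe/TCP uses for its
    header (HDGST) and data (DDGST) digests.

    Bit-at-a-time reflected implementation, polynomial 0x82F63B78.
    Pass a previous return value as `crc` to checksum incrementally.
    """
    crc ^= 0xFFFFFFFF                      # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right, xor in the polynomial when the low bit is set
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF                # final inversion


# Standard CRC-32C check value for the ASCII string "123456789"
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```

A receiver recomputes this over the received payload and compares it to the DDGST field; flipping any payload or digest bit (as the `accel_error_inject_error -o crc32c -t corrupt -i 32` call does every 32 operations here) makes the comparison fail, producing exactly the digest errors logged above.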
00:20:22.771 [2024-04-18 13:49:25.526393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676514 ] 00:20:22.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:22.771 Zero copy mechanism will not be used. 00:20:22.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.029 [2024-04-18 13:49:25.592785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.029 [2024-04-18 13:49:25.706655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.029 13:49:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.029 13:49:25 -- common/autotest_common.sh@850 -- # return 0 00:20:23.029 13:49:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:23.029 13:49:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:23.595 13:49:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:23.595 13:49:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.595 13:49:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.595 13:49:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.595 13:49:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.595 13:49:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.855 nvme0n1 00:20:23.855 13:49:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 
00:20:23.855 13:49:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.855 13:49:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.855 13:49:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.855 13:49:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:23.855 13:49:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:24.114 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:24.114 Zero copy mechanism will not be used. 00:20:24.114 Running I/O for 2 seconds... 00:20:24.114 [2024-04-18 13:49:26.744298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.744657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.744710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.753446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.753813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.753847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.761698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.762042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 
[2024-04-18 13:49:26.762076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.771188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.771510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.771543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.780289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.780618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.780651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.790401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.790747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.790780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.800304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.800746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.800780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.811446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.811757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.811793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.821805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.821971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.821998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.831680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.832057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.832084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.841525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.841936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.841969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.850567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.850907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.859166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.859478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.859522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.867955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.868292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.868320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.876632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 
00:20:24.114 [2024-04-18 13:49:26.876962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.876995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.885485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.885812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.885845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.894168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.894508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.894541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.903092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.903402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.903430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.911535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.114 [2024-04-18 13:49:26.911865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.114 [2024-04-18 13:49:26.911898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.114 [2024-04-18 13:49:26.920611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.920947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.920981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.929473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.929810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.929843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.938075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.938388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.938416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 
13:49:26.947243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.947541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.947574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.955262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.955567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.955601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.964269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.964614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.964647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.973034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.973373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.973400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.981630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.981961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.981994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.990111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.990403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.990431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:26.998864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:26.999184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:26.999230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.007664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.007978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.008012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.016477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.016794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.016827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.025163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.025484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.025512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.033768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.034078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.034111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.042264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.042553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.042595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.050373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.050749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.050782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.058807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.059128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.059161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.067130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.067429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.067456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.075898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.076233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.076261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.084416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.084773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.084805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.093115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.093399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.093426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.101774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.102155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.373 [2024-04-18 13:49:27.102197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.373 [2024-04-18 13:49:27.110856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:24.373 [2024-04-18 13:49:27.111171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.373 [2024-04-18 13:49:27.111225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.373 [2024-04-18 13:49:27.119332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.119676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.119709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.128140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.128439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.128467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.136789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.137109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.137141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.145203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.145480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.145527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.153972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.154300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.154327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.162646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.162960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.162993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.374 [2024-04-18 13:49:27.171103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.374 [2024-04-18 13:49:27.171416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.374 [2024-04-18 13:49:27.171443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.180145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.180481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.180527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.188612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.188921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.188954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.197195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.197532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.197564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.206571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.206926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.206958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.216137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.216561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.216595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.225892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.226260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.226288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.234882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.235142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.235197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.242741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.243062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.243089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.250973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.251294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.251324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.258813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.259074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.259101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.267575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.267908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.267936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.275850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.276134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.631 [2024-04-18 13:49:27.276192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.631 [2024-04-18 13:49:27.283346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.631 [2024-04-18 13:49:27.283626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.283654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.291243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.291531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.291559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.298973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.299311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.299339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.306706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.306970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.306998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.314491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.314825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.314851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.322511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.322775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.322802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.330574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.330841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.330869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.338326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.338631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.338657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.346284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.346565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.346593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.354281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.354582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.354608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.362400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.362682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.362710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.370190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.370472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.370505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.379046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.379438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.379476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.387555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.387854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.387883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.395080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.395432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.395486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.403256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.403643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.411948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.412245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.412284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.420429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.420709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.420736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.428329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.428638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.428665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.632 [2024-04-18 13:49:27.436522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.632 [2024-04-18 13:49:27.436864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.632 [2024-04-18 13:49:27.436905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.444648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.444938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.444965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.453497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.453823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.453851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.462923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.463309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.463337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.472331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.472708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.472734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.481770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.482094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.482130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.490585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.490966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.491004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.499796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.500137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.500195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.508134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.891 [2024-04-18 13:49:27.508500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.891 [2024-04-18 13:49:27.508529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.891 [2024-04-18 13:49:27.516537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.516805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.516832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.524443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.524728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.524755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.532732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.533014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.533042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.541053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.541382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.541409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.550044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.550445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.550474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.558285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.558564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.558590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.566694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.566963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.566990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.574973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.575260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.575288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.583067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.583365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.583393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.590735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.591030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.591057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.598579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.598882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.598909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.606782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.607045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.607073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.614837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.615221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.615260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.622573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.622837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.622873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.630984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.631316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.631344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.639150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.639470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.639497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.647439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.647731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.647757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.655119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.655425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.655453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.662609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.662882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.662908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.670722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.670980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.671007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.678544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.678837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.678863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.686612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.686967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.686995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:24.892 [2024-04-18 13:49:27.694946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:24.892 [2024-04-18 13:49:27.695280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.892 [2024-04-18 13:49:27.695309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.702860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.703134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.703185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.711218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.711508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.711535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.719126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.719439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.719481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.727148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.727488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.727515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.735236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.735539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.735565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.743285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.743656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.751306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.751627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.151 [2024-04-18 13:49:27.751652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:25.151 [2024-04-18 13:49:27.759677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.151 [2024-04-18 13:49:27.759978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.760007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.768055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.768408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.768437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.775819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.776086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.776113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.784140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.784439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.784492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.792578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.792844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.792870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.800741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.801020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.801046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.808848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.809123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.809150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.816961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.817281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.817317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 
13:49:27.824816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.825084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.825110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.832666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.832958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.832993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.841188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.151 [2024-04-18 13:49:27.841506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.151 [2024-04-18 13:49:27.841533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.151 [2024-04-18 13:49:27.848740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.849074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.849101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.856689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.856951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.856988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.864725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.864986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.865013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.872834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.873118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.873144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.880914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.881203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.881259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.889153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.889479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.889520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.896716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.896980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.897014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.904693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.904980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.905007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.912882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.913148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.913197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.920493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.920780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.920808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.928190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.928469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.928508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.935717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.936006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.936033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.943424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.943704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.943731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.152 [2024-04-18 13:49:27.951895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.152 [2024-04-18 13:49:27.952206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.152 [2024-04-18 13:49:27.952233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.411 [2024-04-18 13:49:27.959769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.411 [2024-04-18 13:49:27.960034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.411 [2024-04-18 13:49:27.960062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.411 [2024-04-18 13:49:27.967646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.411 [2024-04-18 13:49:27.967923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.411 [2024-04-18 13:49:27.967958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.411 [2024-04-18 13:49:27.975646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.411 [2024-04-18 13:49:27.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.411 [2024-04-18 13:49:27.975938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.411 [2024-04-18 13:49:27.983809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.411 [2024-04-18 13:49:27.984079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.411 [2024-04-18 13:49:27.984106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.411 [2024-04-18 13:49:27.991612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:27.991886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:27.991925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:27.999363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:27.999649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:27.999676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.007273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.007572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.007599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.015236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.015581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.015610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.023760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.024060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.031551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.031849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.031876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.040357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 
00:20:25.412 [2024-04-18 13:49:28.040757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.040784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.049630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.049988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.050029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.059232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.059569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.067771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.068169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.068224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.075844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.076163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.076212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.083539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.083849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.083882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.091082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.091387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.091414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.098494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.098835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.098867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 
13:49:28.107230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.107569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.115614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.115941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.115974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.123815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.124128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.124160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.131454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.131800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.131844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.139624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.139940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.139973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.148016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.148328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.148355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.156277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.156564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.156597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.164466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.164789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.164821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.173255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.173545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.173589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.181581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.181893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.181933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.190003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.190331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.190357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.412 [2024-04-18 13:49:28.197920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90 00:20:25.412 [2024-04-18 13:49:28.198252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.412 [2024-04-18 13:49:28.198278] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:25.412 [2024-04-18 13:49:28.205650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x222a0e0) with pdu=0x2000190fef90
00:20:25.412 [2024-04-18 13:49:28.205970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:25.412 [2024-04-18 13:49:28.206003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:26.189
00:20:26.189 Latency(us)
00:20:26.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:26.189 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:26.189 nvme0n1 : 2.00 3697.52 462.19 0.00 0.00 4318.40 3398.16 10777.03
00:20:26.189 ===================================================================================================================
00:20:26.189 Total : 3697.52 462.19 0.00 0.00 4318.40 3398.16 10777.03
00:20:26.189 0
00:20:26.189 13:49:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:26.189 13:49:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:26.189 13:49:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:26.189 | .driver_specific
00:20:26.189 | .nvme_error
00:20:26.189 | .status_code
00:20:26.189 | .command_transient_transport_error'
00:20:26.189 13:49:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:26.447 13:49:29 -- host/digest.sh@71 -- # (( 238 > 0 ))
00:20:26.447 13:49:29 -- host/digest.sh@73 -- # killprocess 2676514
00:20:26.447 13:49:29 -- common/autotest_common.sh@936 -- # '[' -z 2676514 ']'
00:20:26.447 13:49:29 -- common/autotest_common.sh@940 -- # kill -0 2676514
00:20:26.447 13:49:29 -- common/autotest_common.sh@941 -- # uname
00:20:26.447 13:49:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:26.447 13:49:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2676514
00:20:26.447 13:49:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:26.447 13:49:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:26.447 13:49:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2676514' killing process with pid 2676514
00:20:26.447 13:49:29 -- common/autotest_common.sh@955 -- # kill 2676514 Received shutdown signal, test time was about 2.000000 seconds
00:20:26.447
00:20:26.447 Latency(us)
00:20:26.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:26.448 ===================================================================================================================
00:20:26.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:26.448 13:49:29 -- common/autotest_common.sh@960 -- # wait 2676514
00:20:26.705 13:49:29 -- host/digest.sh@116 -- # killprocess 2675070
00:20:26.705 13:49:29 -- common/autotest_common.sh@936 -- # '[' -z 2675070 ']'
00:20:26.705 13:49:29 -- common/autotest_common.sh@940 -- # kill -0 2675070
00:20:26.705 13:49:29 -- common/autotest_common.sh@941 -- # uname
00:20:26.705 13:49:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:26.705 13:49:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2675070
00:20:26.705 13:49:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:26.705 13:49:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:26.705 13:49:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2675070' killing process with pid 2675070
00:20:26.705 13:49:29 -- common/autotest_common.sh@955 -- # kill 2675070
00:20:26.963
00:20:26.963 real 0m15.587s
00:20:26.963 user 0m30.401s
00:20:26.963 sys 0m4.669s
00:20:26.963 13:49:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:26.963 13:49:29 -- common/autotest_common.sh@10 -- # set +x
00:20:26.963 ************************************
00:20:26.963 END TEST nvmf_digest_error
00:20:26.963 ************************************
00:20:26.963 13:49:29 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:26.963 13:49:29 -- host/digest.sh@150 -- # nvmftestfini
00:20:26.963 13:49:29 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:26.963 13:49:29 -- nvmf/common.sh@117 -- # sync
00:20:26.963 13:49:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:26.963 13:49:29 -- nvmf/common.sh@120 -- # set +e
00:20:26.963 13:49:29 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:26.963 13:49:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:26.963 rmmod nvme_tcp
00:20:26.963 rmmod nvme_fabrics
00:20:26.963 rmmod nvme_keyring
00:20:26.963 13:49:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:26.963 13:49:29 -- nvmf/common.sh@124 -- # set -e
00:20:26.963 13:49:29 -- nvmf/common.sh@125 -- # return 0
00:20:26.963 13:49:29 -- nvmf/common.sh@478 -- # '[' -n 2675070 ']'
00:20:26.963 13:49:29 -- nvmf/common.sh@479 -- # killprocess 2675070
00:20:26.963 13:49:29 -- common/autotest_common.sh@936 -- # '[' -z 2675070 ']'
00:20:26.963 13:49:29 -- common/autotest_common.sh@940 -- # kill -0 2675070
00:20:26.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2675070) - No such process
00:20:26.963 13:49:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2675070 is not found' Process with pid 2675070 is not found
00:20:26.963 13:49:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:26.963 13:49:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:26.963 13:49:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:26.963 13:49:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:26.963 13:49:29 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:26.963 13:49:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:26.963 13:49:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:26.963 13:49:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:29.493 13:49:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:29.493
00:20:29.493 real 0m35.415s
00:20:29.493 user 1m1.131s
00:20:29.493 sys 0m11.014s
00:20:29.493 13:49:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:29.493 13:49:31 -- common/autotest_common.sh@10 -- # set +x
00:20:29.493 ************************************
00:20:29.493 END TEST nvmf_digest
00:20:29.493 ************************************
00:20:29.493 13:49:31 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:20:29.493 13:49:31 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:20:29.493 13:49:31 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:20:29.493 13:49:31 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:29.493 13:49:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:29.493 13:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:29.493 13:49:31 -- common/autotest_common.sh@10 -- # set +x
00:20:29.493 ************************************
00:20:29.493 START TEST nvmf_bdevperf
00:20:29.493 ************************************
00:20:29.493 13:49:31 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:20:29.493 * Looking for test storage... 00:20:29.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.493 13:49:31 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.493 13:49:31 -- nvmf/common.sh@7 -- # uname -s 00:20:29.493 13:49:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.493 13:49:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.493 13:49:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.493 13:49:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.493 13:49:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.493 13:49:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.493 13:49:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.493 13:49:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.493 13:49:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.493 13:49:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.493 13:49:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.493 13:49:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.493 13:49:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.493 13:49:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.493 13:49:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.493 13:49:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.493 13:49:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.493 13:49:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.493 13:49:31 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.493 13:49:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.493 13:49:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.493 13:49:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.493 13:49:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.493 13:49:31 -- paths/export.sh@5 -- # export PATH 00:20:29.493 13:49:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.493 13:49:31 -- nvmf/common.sh@47 -- # : 0 00:20:29.493 13:49:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.493 13:49:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.493 13:49:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.493 13:49:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.493 13:49:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.493 13:49:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.493 13:49:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.493 13:49:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.493 13:49:31 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.493 13:49:31 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.493 13:49:31 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:29.493 13:49:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:29.493 13:49:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.493 13:49:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:29.493 13:49:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:29.493 13:49:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:29.493 13:49:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.493 13:49:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.493 13:49:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.493 13:49:31 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:29.493 13:49:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:29.493 13:49:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.493 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:20:31.394 13:49:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:31.394 13:49:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.394 13:49:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.394 13:49:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.394 13:49:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.394 13:49:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.394 13:49:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.394 13:49:33 -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.394 13:49:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.394 13:49:33 -- nvmf/common.sh@296 -- # e810=() 00:20:31.394 13:49:33 -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.394 13:49:33 -- nvmf/common.sh@297 -- # x722=() 00:20:31.394 13:49:33 -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.394 13:49:33 -- nvmf/common.sh@298 -- # mlx=() 00:20:31.394 13:49:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.394 13:49:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.394 13:49:33 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.394 13:49:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.394 13:49:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.394 13:49:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.394 13:49:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.394 13:49:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.394 13:49:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.394 13:49:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.394 13:49:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:31.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:31.394 13:49:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.394 13:49:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.395 13:49:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:31.395 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:31.395 13:49:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.395 13:49:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.395 13:49:33 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.395 13:49:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.395 13:49:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.395 13:49:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:31.395 13:49:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.395 13:49:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:31.395 Found net devices under 0000:84:00.0: cvl_0_0 00:20:31.395 13:49:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.395 13:49:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.395 13:49:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.395 13:49:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:31.395 13:49:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.395 13:49:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:31.395 Found net devices under 0000:84:00.1: cvl_0_1 00:20:31.395 13:49:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.395 13:49:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:31.395 13:49:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:31.395 13:49:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:31.395 13:49:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:31.395 13:49:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:31.395 13:49:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.395 13:49:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.395 13:49:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.395 13:49:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.395 13:49:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.395 13:49:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.395 13:49:34 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.395 13:49:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.395 13:49:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.395 13:49:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.395 13:49:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.395 13:49:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.395 13:49:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.395 13:49:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.395 13:49:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.395 13:49:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.395 13:49:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.395 13:49:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.395 13:49:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.395 13:49:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:20:31.395 00:20:31.395 --- 10.0.0.2 ping statistics --- 00:20:31.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.395 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:31.395 13:49:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:31.395 00:20:31.395 --- 10.0.0.1 ping statistics --- 00:20:31.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.395 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:31.395 13:49:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.395 13:49:34 -- nvmf/common.sh@411 -- # return 0 00:20:31.395 13:49:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:31.395 13:49:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.395 13:49:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:31.395 13:49:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:31.395 13:49:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.395 13:49:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:31.395 13:49:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:31.395 13:49:34 -- host/bdevperf.sh@25 -- # tgt_init 00:20:31.395 13:49:34 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:31.395 13:49:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:31.395 13:49:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:31.395 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 13:49:34 -- nvmf/common.sh@470 -- # nvmfpid=2678933 00:20:31.395 13:49:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:31.395 13:49:34 -- nvmf/common.sh@471 -- # waitforlisten 2678933 00:20:31.395 13:49:34 -- common/autotest_common.sh@817 -- # '[' -z 2678933 ']' 00:20:31.395 13:49:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.395 13:49:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:31.395 13:49:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:31.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.395 13:49:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:31.395 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:20:31.653 [2024-04-18 13:49:34.205671] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:31.653 [2024-04-18 13:49:34.205765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.653 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.653 [2024-04-18 13:49:34.275311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:31.653 [2024-04-18 13:49:34.390961] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.653 [2024-04-18 13:49:34.391040] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.653 [2024-04-18 13:49:34.391056] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.653 [2024-04-18 13:49:34.391070] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.653 [2024-04-18 13:49:34.391082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.653 [2024-04-18 13:49:34.391173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.653 [2024-04-18 13:49:34.391299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.653 [2024-04-18 13:49:34.391303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.585 13:49:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:32.585 13:49:35 -- common/autotest_common.sh@850 -- # return 0 00:20:32.585 13:49:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:32.585 13:49:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 13:49:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.585 13:49:35 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.585 13:49:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 [2024-04-18 13:49:35.205020] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.585 13:49:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.585 13:49:35 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.585 13:49:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 Malloc0 00:20:32.585 13:49:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.585 13:49:35 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.585 13:49:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 13:49:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.585 13:49:35 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.585 13:49:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 13:49:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.585 13:49:35 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.585 13:49:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.585 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:20:32.585 [2024-04-18 13:49:35.274247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.585 13:49:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.585 13:49:35 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:32.585 13:49:35 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:32.585 13:49:35 -- nvmf/common.sh@521 -- # config=() 00:20:32.585 13:49:35 -- nvmf/common.sh@521 -- # local subsystem config 00:20:32.585 13:49:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:32.585 13:49:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:32.585 { 00:20:32.586 "params": { 00:20:32.586 "name": "Nvme$subsystem", 00:20:32.586 "trtype": "$TEST_TRANSPORT", 00:20:32.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.586 "adrfam": "ipv4", 00:20:32.586 "trsvcid": "$NVMF_PORT", 00:20:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.586 "hdgst": ${hdgst:-false}, 00:20:32.586 "ddgst": ${ddgst:-false} 00:20:32.586 }, 00:20:32.586 "method": "bdev_nvme_attach_controller" 00:20:32.586 } 00:20:32.586 EOF 00:20:32.586 )") 00:20:32.586 13:49:35 -- nvmf/common.sh@543 -- # cat 00:20:32.586 13:49:35 -- nvmf/common.sh@545 -- # jq . 
00:20:32.586 13:49:35 -- nvmf/common.sh@546 -- # IFS=, 00:20:32.586 13:49:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:32.586 "params": { 00:20:32.586 "name": "Nvme1", 00:20:32.586 "trtype": "tcp", 00:20:32.586 "traddr": "10.0.0.2", 00:20:32.586 "adrfam": "ipv4", 00:20:32.586 "trsvcid": "4420", 00:20:32.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.586 "hdgst": false, 00:20:32.586 "ddgst": false 00:20:32.586 }, 00:20:32.586 "method": "bdev_nvme_attach_controller" 00:20:32.586 }' 00:20:32.586 [2024-04-18 13:49:35.321108] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:32.586 [2024-04-18 13:49:35.321206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679087 ] 00:20:32.586 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.586 [2024-04-18 13:49:35.381257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.843 [2024-04-18 13:49:35.494855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.101 Running I/O for 1 seconds... 
00:20:34.033 00:20:34.033 Latency(us) 00:20:34.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.033 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:34.033 Verification LBA range: start 0x0 length 0x4000 00:20:34.033 Nvme1n1 : 1.01 8695.92 33.97 0.00 0.00 14664.52 3179.71 15437.37 00:20:34.033 =================================================================================================================== 00:20:34.033 Total : 8695.92 33.97 0.00 0.00 14664.52 3179.71 15437.37 00:20:34.291 13:49:36 -- host/bdevperf.sh@30 -- # bdevperfpid=2679352 00:20:34.291 13:49:36 -- host/bdevperf.sh@32 -- # sleep 3 00:20:34.291 13:49:36 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:34.291 13:49:36 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:34.291 13:49:36 -- nvmf/common.sh@521 -- # config=() 00:20:34.291 13:49:36 -- nvmf/common.sh@521 -- # local subsystem config 00:20:34.291 13:49:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.291 13:49:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.291 { 00:20:34.291 "params": { 00:20:34.291 "name": "Nvme$subsystem", 00:20:34.291 "trtype": "$TEST_TRANSPORT", 00:20:34.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.291 "adrfam": "ipv4", 00:20:34.291 "trsvcid": "$NVMF_PORT", 00:20:34.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.291 "hdgst": ${hdgst:-false}, 00:20:34.291 "ddgst": ${ddgst:-false} 00:20:34.291 }, 00:20:34.291 "method": "bdev_nvme_attach_controller" 00:20:34.291 } 00:20:34.291 EOF 00:20:34.291 )") 00:20:34.291 13:49:36 -- nvmf/common.sh@543 -- # cat 00:20:34.291 13:49:36 -- nvmf/common.sh@545 -- # jq . 
00:20:34.291 13:49:36 -- nvmf/common.sh@546 -- # IFS=, 00:20:34.292 13:49:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:34.292 "params": { 00:20:34.292 "name": "Nvme1", 00:20:34.292 "trtype": "tcp", 00:20:34.292 "traddr": "10.0.0.2", 00:20:34.292 "adrfam": "ipv4", 00:20:34.292 "trsvcid": "4420", 00:20:34.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.292 "hdgst": false, 00:20:34.292 "ddgst": false 00:20:34.292 }, 00:20:34.292 "method": "bdev_nvme_attach_controller" 00:20:34.292 }' 00:20:34.292 [2024-04-18 13:49:37.025271] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:20:34.292 [2024-04-18 13:49:37.025362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679352 ] 00:20:34.292 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.292 [2024-04-18 13:49:37.085053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.550 [2024-04-18 13:49:37.189333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.808 Running I/O for 15 seconds... 
00:20:37.338 13:49:39 -- host/bdevperf.sh@33 -- # kill -9 2678933
00:20:37.338 13:49:39 -- host/bdevperf.sh@35 -- # sleep 3
00:20:37.338 [2024-04-18 13:49:39.999400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.338 [2024-04-18 13:49:39.999487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every I/O outstanding on qid:1 when the process was killed (READ lba:39592 through lba:40424, WRITE lba:40552 through lba:40600), each aborted with SQ DELETION (00/08) ...]
00:20:37.341
[2024-04-18 13:49:40.003405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.341 [2024-04-18 13:49:40.003648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.341 [2024-04-18 13:49:40.003663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.342 [2024-04-18 13:49:40.003856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.003872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277c290 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.003891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:37.342 [2024-04-18 13:49:40.003903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:37.342 [2024-04-18 13:49:40.003916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40544 len:8 PRP1 0x0 PRP2 0x0 00:20:37.342 [2024-04-18 13:49:40.003931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.342 [2024-04-18 13:49:40.004003] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x277c290 was disconnected and freed. reset controller. 
00:20:37.342 [2024-04-18 13:49:40.008121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.008225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.008987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.009194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.009230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.009249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.009496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.009738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.009762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.009780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.013342] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.022352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.022863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.023136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.023195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.023215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.023452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.023704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.023730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.023747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.027307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.036321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.036871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.037098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.037148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.037166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.037417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.037660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.037685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.037701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.041258] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.050926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.051355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.051570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.051600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.051617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.051855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.052103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.052128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.052144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.055704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.064915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.065425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.065664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.065695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.065713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.065950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.066205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.066230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.066246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.069792] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.078798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.079264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.079529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.079559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.079577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.079814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.080055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.080079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.080095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.083678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.092672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.093175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.093465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.093495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.342 [2024-04-18 13:49:40.093513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.342 [2024-04-18 13:49:40.093750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.342 [2024-04-18 13:49:40.093992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.342 [2024-04-18 13:49:40.094022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.342 [2024-04-18 13:49:40.094039] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.342 [2024-04-18 13:49:40.097591] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.342 [2024-04-18 13:49:40.106603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.342 [2024-04-18 13:49:40.107028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.342 [2024-04-18 13:49:40.107189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.343 [2024-04-18 13:49:40.107218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.343 [2024-04-18 13:49:40.107236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.343 [2024-04-18 13:49:40.107473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.343 [2024-04-18 13:49:40.107713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.343 [2024-04-18 13:49:40.107737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.343 [2024-04-18 13:49:40.107752] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.343 [2024-04-18 13:49:40.111302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.343 [2024-04-18 13:49:40.120468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.343 [2024-04-18 13:49:40.120937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.343 [2024-04-18 13:49:40.121137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.343 [2024-04-18 13:49:40.121166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.343 [2024-04-18 13:49:40.121192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.343 [2024-04-18 13:49:40.121438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.343 [2024-04-18 13:49:40.121702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.343 [2024-04-18 13:49:40.121726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.343 [2024-04-18 13:49:40.121741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.343 [2024-04-18 13:49:40.125254] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.343 [2024-04-18 13:49:40.134541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.343 [2024-04-18 13:49:40.134907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.343 [2024-04-18 13:49:40.135094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.343 [2024-04-18 13:49:40.135123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.343 [2024-04-18 13:49:40.135140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.343 [2024-04-18 13:49:40.135392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.343 [2024-04-18 13:49:40.135644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.343 [2024-04-18 13:49:40.135668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.343 [2024-04-18 13:49:40.135689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.343 [2024-04-18 13:49:40.139307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.602 [2024-04-18 13:49:40.148415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.602 [2024-04-18 13:49:40.149415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.149599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.149667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.602 [2024-04-18 13:49:40.149686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.602 [2024-04-18 13:49:40.149925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.602 [2024-04-18 13:49:40.150168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.602 [2024-04-18 13:49:40.150202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.602 [2024-04-18 13:49:40.150235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.602 [2024-04-18 13:49:40.153817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.602 [2024-04-18 13:49:40.162279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.602 [2024-04-18 13:49:40.162786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.162989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.163012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.602 [2024-04-18 13:49:40.163027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.602 [2024-04-18 13:49:40.163256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.602 [2024-04-18 13:49:40.163489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.602 [2024-04-18 13:49:40.163510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.602 [2024-04-18 13:49:40.163550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.602 [2024-04-18 13:49:40.167113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.602 [2024-04-18 13:49:40.176210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.602 [2024-04-18 13:49:40.176644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.176836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.176871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.602 [2024-04-18 13:49:40.176888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.602 [2024-04-18 13:49:40.177124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.602 [2024-04-18 13:49:40.177377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.602 [2024-04-18 13:49:40.177400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.602 [2024-04-18 13:49:40.177415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.602 [2024-04-18 13:49:40.181013] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.602 [2024-04-18 13:49:40.190086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.602 [2024-04-18 13:49:40.190516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.190751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.190801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.602 [2024-04-18 13:49:40.190818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.602 [2024-04-18 13:49:40.191054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.602 [2024-04-18 13:49:40.191315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.602 [2024-04-18 13:49:40.191338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.602 [2024-04-18 13:49:40.191352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.602 [2024-04-18 13:49:40.194871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.602 [2024-04-18 13:49:40.204072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.602 [2024-04-18 13:49:40.204482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.204747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.602 [2024-04-18 13:49:40.204775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.602 [2024-04-18 13:49:40.204793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.602 [2024-04-18 13:49:40.205035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.602 [2024-04-18 13:49:40.205296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.602 [2024-04-18 13:49:40.205319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.602 [2024-04-18 13:49:40.205333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.602 [2024-04-18 13:49:40.208869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.603 [2024-04-18 13:49:40.217965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.218377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.218531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.218559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.218577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.218813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.219053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.219076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.219092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.222621] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.231955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.232492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.232698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.232747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.232765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.233002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.233265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.233288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.233302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.236894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.245973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.246363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.246577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.246630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.246648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.246884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.247124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.247148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.247173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.250727] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.259934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.260374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.260602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.260630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.260647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.260884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.261125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.261158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.261173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.264739] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.273754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.274230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.274365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.274394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.274412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.274649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.274891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.274914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.274929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.278506] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.287719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.288094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.288261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.288291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.288308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.288544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.288789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.288812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.288827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.292381] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.301382] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.301793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.301970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.301996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.302011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.302238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.302454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.302478] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.603 [2024-04-18 13:49:40.302492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.603 [2024-04-18 13:49:40.305678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.603 [2024-04-18 13:49:40.314734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.603 [2024-04-18 13:49:40.315085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.315241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.603 [2024-04-18 13:49:40.315270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.603 [2024-04-18 13:49:40.315286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.603 [2024-04-18 13:49:40.315479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.603 [2024-04-18 13:49:40.315677] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.603 [2024-04-18 13:49:40.315696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.315709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.319195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.328661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.329074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.329201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.329227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.329243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.329456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.329680] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.329701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.329716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.333253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.342502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.342890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.343062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.343091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.343108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.343366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.343608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.343631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.343647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.347203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.356416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.356867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.357095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.357123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.357146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.357393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.357635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.357659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.357675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.361222] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.370246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.370750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.370949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.370997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.371026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.371272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.371523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.371558] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.371574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.375114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.384101] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.384504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.384687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.384736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.384754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.384990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.385241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.385266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.385282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.388823] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.604 [2024-04-18 13:49:40.398035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.604 [2024-04-18 13:49:40.398484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.398661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.604 [2024-04-18 13:49:40.398711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.604 [2024-04-18 13:49:40.398728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.604 [2024-04-18 13:49:40.398969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.604 [2024-04-18 13:49:40.399222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.604 [2024-04-18 13:49:40.399247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.604 [2024-04-18 13:49:40.399263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.604 [2024-04-18 13:49:40.402803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.411999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.412442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.412620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.412673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.412690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.412928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.413169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.413202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.413219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.416779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.425990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.426390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.426575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.426633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.426651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.426887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.427128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.427151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.427167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.430730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.439983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.440368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.440595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.440646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.440663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.440900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.441145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.441171] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.441199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.444749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.453958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.454472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.454673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.454721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.454738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.454981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.455234] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.455258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.455274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.458813] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.467835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.468292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.468466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.468521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.468539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.468775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.469016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.469040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.469055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.472611] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.481814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.482310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.482477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.482527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.863 [2024-04-18 13:49:40.482544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.863 [2024-04-18 13:49:40.482781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.863 [2024-04-18 13:49:40.483021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.863 [2024-04-18 13:49:40.483050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.863 [2024-04-18 13:49:40.483067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.863 [2024-04-18 13:49:40.486634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.863 [2024-04-18 13:49:40.495630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.863 [2024-04-18 13:49:40.496030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.863 [2024-04-18 13:49:40.496227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.496257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.496274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.496512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.496753] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.496777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.496792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.500342] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.509564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.510036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.510200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.510229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.510247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.510485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.510725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.510748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.510764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.514322] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.523528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.524034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.524159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.524195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.524214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.524450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.524701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.524725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.524745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.528295] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.537518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.538020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.538170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.538206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.538224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.538460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.538701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.538724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.538740] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.542297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.551503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.551952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.552103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.552132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.552149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.552409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.552650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.552674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.552689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.556233] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.565450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.565907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.566128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.566156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.566174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.566421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.566662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.566686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.566701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.570254] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.579466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.864 [2024-04-18 13:49:40.579965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.580131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.864 [2024-04-18 13:49:40.580160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:37.864 [2024-04-18 13:49:40.580187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:37.864 [2024-04-18 13:49:40.580427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:37.864 [2024-04-18 13:49:40.580668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.864 [2024-04-18 13:49:40.580691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.864 [2024-04-18 13:49:40.580707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.864 [2024-04-18 13:49:40.584254] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.864 [2024-04-18 13:49:40.593452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.864 [2024-04-18 13:49:40.593952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.594161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.594200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.864 [2024-04-18 13:49:40.594219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.864 [2024-04-18 13:49:40.594455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.864 [2024-04-18 13:49:40.594696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.864 [2024-04-18 13:49:40.594728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.864 [2024-04-18 13:49:40.594743] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.864 [2024-04-18 13:49:40.598291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.864 [2024-04-18 13:49:40.607301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.864 [2024-04-18 13:49:40.607730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.607922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.607970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.864 [2024-04-18 13:49:40.607988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.864 [2024-04-18 13:49:40.608235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.864 [2024-04-18 13:49:40.608476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.864 [2024-04-18 13:49:40.608500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.864 [2024-04-18 13:49:40.608515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.864 [2024-04-18 13:49:40.612062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.864 [2024-04-18 13:49:40.621286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.864 [2024-04-18 13:49:40.621742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.621984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.864 [2024-04-18 13:49:40.622034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.864 [2024-04-18 13:49:40.622051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.864 [2024-04-18 13:49:40.622310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.865 [2024-04-18 13:49:40.622552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.865 [2024-04-18 13:49:40.622576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.865 [2024-04-18 13:49:40.622591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.865 [2024-04-18 13:49:40.626131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.865 [2024-04-18 13:49:40.635135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.865 [2024-04-18 13:49:40.635577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.635776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.635824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.865 [2024-04-18 13:49:40.635841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.865 [2024-04-18 13:49:40.636087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.865 [2024-04-18 13:49:40.636339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.865 [2024-04-18 13:49:40.636364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.865 [2024-04-18 13:49:40.636379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.865 [2024-04-18 13:49:40.639921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.865 [2024-04-18 13:49:40.649117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.865 [2024-04-18 13:49:40.649585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.649847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.649876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.865 [2024-04-18 13:49:40.649893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.865 [2024-04-18 13:49:40.650129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.865 [2024-04-18 13:49:40.650379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.865 [2024-04-18 13:49:40.650404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.865 [2024-04-18 13:49:40.650419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.865 [2024-04-18 13:49:40.653960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.865 [2024-04-18 13:49:40.662963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.865 [2024-04-18 13:49:40.663503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.663673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.865 [2024-04-18 13:49:40.663723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:37.865 [2024-04-18 13:49:40.663740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:37.865 [2024-04-18 13:49:40.663977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:37.865 [2024-04-18 13:49:40.664228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.865 [2024-04-18 13:49:40.664252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:37.865 [2024-04-18 13:49:40.664268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:37.865 [2024-04-18 13:49:40.667808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.124 [2024-04-18 13:49:40.676804] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.124 [2024-04-18 13:49:40.677252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.124 [2024-04-18 13:49:40.677483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.124 [2024-04-18 13:49:40.677547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.124 [2024-04-18 13:49:40.677564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.124 [2024-04-18 13:49:40.677810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.124 [2024-04-18 13:49:40.678051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.124 [2024-04-18 13:49:40.678087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.124 [2024-04-18 13:49:40.678102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.124 [2024-04-18 13:49:40.681657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.124 [2024-04-18 13:49:40.690579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.124 [2024-04-18 13:49:40.690947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.124 [2024-04-18 13:49:40.691125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.124 [2024-04-18 13:49:40.691154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.124 [2024-04-18 13:49:40.691172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.124 [2024-04-18 13:49:40.691400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.124 [2024-04-18 13:49:40.691671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.124 [2024-04-18 13:49:40.691695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.124 [2024-04-18 13:49:40.691710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.124 [2024-04-18 13:49:40.695274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.124 [2024-04-18 13:49:40.704494] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.124 [2024-04-18 13:49:40.704935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.705123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.705152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.705169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.705405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.705656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.705679] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.705695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.709266] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.718483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.718946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.719130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.719167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.719195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.719412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.719663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.719687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.719702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.723246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.732389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.732812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.732988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.733038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.733055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.733301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.733547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.733570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.733586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.737137] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.746346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.746776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.747012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.747060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.747083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.747331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.747573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.747596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.747612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.751151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.760346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.760840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.761049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.761101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.761118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.761367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.761609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.761632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.761648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.765197] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.774191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.774615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.774849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.774898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.774916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.775152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.775402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.775427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.775442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.778980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.788197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.788654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.788832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.788881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.788898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.789142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.789393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.789417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.789432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.792972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.802160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.802570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.802748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.802797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.802814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.803055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.803306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.803331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.803346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.806884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.816075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.816581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.816741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.816791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.816808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.817045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.125 [2024-04-18 13:49:40.817298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.125 [2024-04-18 13:49:40.817322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.125 [2024-04-18 13:49:40.817337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.125 [2024-04-18 13:49:40.820875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.125 [2024-04-18 13:49:40.830073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.125 [2024-04-18 13:49:40.830481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.830631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.125 [2024-04-18 13:49:40.830681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.125 [2024-04-18 13:49:40.830699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.125 [2024-04-18 13:49:40.830935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.126 [2024-04-18 13:49:40.831191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.126 [2024-04-18 13:49:40.831215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.126 [2024-04-18 13:49:40.831231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.126 [2024-04-18 13:49:40.834777] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.126 [2024-04-18 13:49:40.843969] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.126 [2024-04-18 13:49:40.844473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.844634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.844669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.126 [2024-04-18 13:49:40.844686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.126 [2024-04-18 13:49:40.844923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.126 [2024-04-18 13:49:40.845163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.126 [2024-04-18 13:49:40.845196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.126 [2024-04-18 13:49:40.845213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.126 [2024-04-18 13:49:40.848752] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.126 [2024-04-18 13:49:40.857960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.126 [2024-04-18 13:49:40.858420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.858598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.858648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.126 [2024-04-18 13:49:40.858665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.126 [2024-04-18 13:49:40.858901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.126 [2024-04-18 13:49:40.859141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.126 [2024-04-18 13:49:40.859164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.126 [2024-04-18 13:49:40.859190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.126 [2024-04-18 13:49:40.862734] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.126 [2024-04-18 13:49:40.871941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.126 [2024-04-18 13:49:40.872361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.872612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.126 [2024-04-18 13:49:40.872641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.126 [2024-04-18 13:49:40.872659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.126 [2024-04-18 13:49:40.872896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.126 [2024-04-18 13:49:40.873137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.126 [2024-04-18 13:49:40.873167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.126 [2024-04-18 13:49:40.873196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.126 [2024-04-18 13:49:40.876743] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.126 [2024-04-18 13:49:40.885959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.126 [2024-04-18 13:49:40.886396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.886634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.886681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.126 [2024-04-18 13:49:40.886699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.126 [2024-04-18 13:49:40.886935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.126 [2024-04-18 13:49:40.887186] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.126 [2024-04-18 13:49:40.887222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.126 [2024-04-18 13:49:40.887238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.126 [2024-04-18 13:49:40.890789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.126 [2024-04-18 13:49:40.899788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.126 [2024-04-18 13:49:40.900312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.900566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.900616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.126 [2024-04-18 13:49:40.900633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.126 [2024-04-18 13:49:40.900871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.126 [2024-04-18 13:49:40.901112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.126 [2024-04-18 13:49:40.901137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.126 [2024-04-18 13:49:40.901152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.126 [2024-04-18 13:49:40.904706] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.126 [2024-04-18 13:49:40.913703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.126 [2024-04-18 13:49:40.914186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.914431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.914461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.126 [2024-04-18 13:49:40.914479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.126 [2024-04-18 13:49:40.914717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.126 [2024-04-18 13:49:40.914959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.126 [2024-04-18 13:49:40.914984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.126 [2024-04-18 13:49:40.915004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.126 [2024-04-18 13:49:40.918560] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.126 [2024-04-18 13:49:40.927553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.126 [2024-04-18 13:49:40.928051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.928348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.126 [2024-04-18 13:49:40.928389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.126 [2024-04-18 13:49:40.928406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.126 [2024-04-18 13:49:40.928644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.126 [2024-04-18 13:49:40.928883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.126 [2024-04-18 13:49:40.928908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.126 [2024-04-18 13:49:40.928923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:40.932481] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:40.941479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:40.942004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.942324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.942375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:40.942393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:40.942629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:40.942869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:40.942894] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:40.942909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:40.946463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:40.955461] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:40.955981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.956227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.956257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:40.956274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:40.956511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:40.956752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:40.956777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:40.956792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:40.960353] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:40.969354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:40.969858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.970207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.970237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:40.970255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:40.970491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:40.970731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:40.970756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:40.970772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:40.974324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:40.983254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:40.983728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.983900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.983948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:40.983966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:40.984216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:40.984458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:40.984484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:40.984500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:40.988044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:40.997265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:40.997789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.997999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:40.998047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:40.998065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:40.998316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:40.998558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:40.998583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:40.998599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:41.002145] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:41.011151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:41.011658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.011878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.011928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:41.011946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:41.012195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:41.012437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:41.012462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:41.012478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:41.016023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:41.025025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:41.025579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.025851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.025900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:41.025918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:41.026155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.391 [2024-04-18 13:49:41.026409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.391 [2024-04-18 13:49:41.026435] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.391 [2024-04-18 13:49:41.026451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.391 [2024-04-18 13:49:41.029997] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.391 [2024-04-18 13:49:41.039224] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.391 [2024-04-18 13:49:41.039747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.040053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.391 [2024-04-18 13:49:41.040103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.391 [2024-04-18 13:49:41.040121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.391 [2024-04-18 13:49:41.040373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.040613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.040638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.040654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.044207] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.053200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.053683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.053988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.054038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.054056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.054307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.054548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.054572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.054588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.058134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.067133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.067617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.067884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.067933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.067951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.068201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.068443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.068468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.068484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.072025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.081015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.081517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.081826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.081877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.081894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.082130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.082386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.082412] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.082428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.085972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.094964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.095518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.095835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.095883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.095901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.096137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.096391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.096416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.096432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.099977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.108974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.109500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.109828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.109876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.109894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.110130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.110383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.110409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.110425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.113971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.122967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.123504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.123811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.123864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.123882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.124119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.124374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.124399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.124415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.127959] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.136954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.137482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.137724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.137772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.137795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.138032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.138288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.138314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.138330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.141873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.150868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.151394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.151712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.151761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.151778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.152014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.152267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.152293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.392 [2024-04-18 13:49:41.152308] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.392 [2024-04-18 13:49:41.155853] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.392 [2024-04-18 13:49:41.164847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.392 [2024-04-18 13:49:41.165384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.165693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.392 [2024-04-18 13:49:41.165742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.392 [2024-04-18 13:49:41.165760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.392 [2024-04-18 13:49:41.165997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.392 [2024-04-18 13:49:41.166250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.392 [2024-04-18 13:49:41.166276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.393 [2024-04-18 13:49:41.166292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.393 [2024-04-18 13:49:41.169836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.393 [2024-04-18 13:49:41.178831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.393 [2024-04-18 13:49:41.179353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.393 [2024-04-18 13:49:41.179631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.393 [2024-04-18 13:49:41.179679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.393 [2024-04-18 13:49:41.179696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.393 [2024-04-18 13:49:41.179939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.393 [2024-04-18 13:49:41.180206] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.393 [2024-04-18 13:49:41.180231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.393 [2024-04-18 13:49:41.180248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.393 [2024-04-18 13:49:41.183790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.700 [2024-04-18 13:49:41.192792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:38.700 [2024-04-18 13:49:41.193275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.700 [2024-04-18 13:49:41.193544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:38.700 [2024-04-18 13:49:41.193596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:38.700 [2024-04-18 13:49:41.193614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:38.700 [2024-04-18 13:49:41.193852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:38.700 [2024-04-18 13:49:41.194092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:38.700 [2024-04-18 13:49:41.194117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:38.700 [2024-04-18 13:49:41.194133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:38.700 [2024-04-18 13:49:41.197722] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:38.700 [2024-04-18 13:49:41.206729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.207253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.207584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.207633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.207651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.207889] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.208130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.208154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.208171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.211726] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.220726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.221251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.221515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.221566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.221583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.221821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.222069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.222094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.222110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.225667] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.234741] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.235273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.235576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.235623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.235641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.235878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.236120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.236144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.236160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.239591] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.248665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.249194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.249501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.249531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.249548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.249786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.250028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.250053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.250069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.253432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.262526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.263078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.263304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.263337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.263353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.263601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.263848] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.263872] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.263888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.267421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.276552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.276966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.277145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.277173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.277201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.700 [2024-04-18 13:49:41.277431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.700 [2024-04-18 13:49:41.277695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.700 [2024-04-18 13:49:41.277720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.700 [2024-04-18 13:49:41.277736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.700 [2024-04-18 13:49:41.281330] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.700 [2024-04-18 13:49:41.290387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.700 [2024-04-18 13:49:41.290793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.290993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.700 [2024-04-18 13:49:41.291043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.700 [2024-04-18 13:49:41.291061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.291309] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.291559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.291583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.291598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.295143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.304371] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.304801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.305010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.305064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.305081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.305329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.305570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.305594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.305615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.309159] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.318392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.318863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.319057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.319107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.319125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.319372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.319613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.319637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.319653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.323208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.332348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.332773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.333021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.333071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.333089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.333340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.333598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.333618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.333630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.337183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.346269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.346761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.346989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.347043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.347061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.347316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.347562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.347582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.347599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.351189] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.360251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.360750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.360963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.361018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.361036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.361295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.361527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.361563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.361577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.365087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.374158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.374702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.375011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.375061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.375079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.375333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.375573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.375598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.375614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.379158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.388161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.388679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.388924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.388974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.388992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.389253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.389495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.389520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.389536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.701 [2024-04-18 13:49:41.393085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.701 [2024-04-18 13:49:41.402085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.701 [2024-04-18 13:49:41.402579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.402774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.701 [2024-04-18 13:49:41.402823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.701 [2024-04-18 13:49:41.402841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.701 [2024-04-18 13:49:41.403077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.701 [2024-04-18 13:49:41.403332] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.701 [2024-04-18 13:49:41.403358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.701 [2024-04-18 13:49:41.403375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.406922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.702 [2024-04-18 13:49:41.415913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.702 [2024-04-18 13:49:41.416463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.416748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.416796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.702 [2024-04-18 13:49:41.416814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.702 [2024-04-18 13:49:41.417059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.702 [2024-04-18 13:49:41.417315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.702 [2024-04-18 13:49:41.417341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.702 [2024-04-18 13:49:41.417357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.420900] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.702 [2024-04-18 13:49:41.429932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.702 [2024-04-18 13:49:41.430355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.430537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.430597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.702 [2024-04-18 13:49:41.430615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.702 [2024-04-18 13:49:41.430851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.702 [2024-04-18 13:49:41.431093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.702 [2024-04-18 13:49:41.431117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.702 [2024-04-18 13:49:41.431133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.434691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.702 [2024-04-18 13:49:41.443888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.702 [2024-04-18 13:49:41.444295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.444504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.444568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.702 [2024-04-18 13:49:41.444586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.702 [2024-04-18 13:49:41.444822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.702 [2024-04-18 13:49:41.445063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.702 [2024-04-18 13:49:41.445087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.702 [2024-04-18 13:49:41.445103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.448649] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.702 [2024-04-18 13:49:41.457853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.702 [2024-04-18 13:49:41.458268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.458431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.458460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.702 [2024-04-18 13:49:41.458478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.702 [2024-04-18 13:49:41.458715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.702 [2024-04-18 13:49:41.458955] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.702 [2024-04-18 13:49:41.458979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.702 [2024-04-18 13:49:41.458995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.462538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.702 [2024-04-18 13:49:41.471740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.702 [2024-04-18 13:49:41.472257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.472421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.702 [2024-04-18 13:49:41.472473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.702 [2024-04-18 13:49:41.472501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.702 [2024-04-18 13:49:41.472738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.702 [2024-04-18 13:49:41.472979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.702 [2024-04-18 13:49:41.473004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.702 [2024-04-18 13:49:41.473020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.702 [2024-04-18 13:49:41.476566] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.485572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.486074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.486304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.486334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.486352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.486588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.486829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.486852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.486867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.490457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.499487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.500036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.500303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.500331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.500348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.500584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.500826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.500851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.500867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.504425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.513444] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.513884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.514041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.514070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.514088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.514334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.514576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.514599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.514615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.518155] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.527357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.527799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.528023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.528075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.528098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.528348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.528590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.528613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.528629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.532168] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.541157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.541584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.541821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.541871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.541890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.542127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.542376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.542401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.542416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.545960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.555158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.555703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.556021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.556070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.556088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.556339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.556581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.962 [2024-04-18 13:49:41.556605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.962 [2024-04-18 13:49:41.556621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.962 [2024-04-18 13:49:41.560168] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.962 [2024-04-18 13:49:41.568969] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.962 [2024-04-18 13:49:41.569421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.569695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.962 [2024-04-18 13:49:41.569744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.962 [2024-04-18 13:49:41.569767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.962 [2024-04-18 13:49:41.570005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.962 [2024-04-18 13:49:41.570263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.570288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.570303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.573845] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.582838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.583311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.583540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.583589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.583607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.583844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.584086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.584110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.584125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.587680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.596691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.597148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.597389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.597418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.597436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.597672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.597914] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.597939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.597955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.601518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.610523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.610968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.611299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.611331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.611349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.611592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.611834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.611858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.611874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.615433] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.624432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.624924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.625239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.625270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.625288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.625525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.625766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.625791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.625807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.629363] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.638360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.638856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.639205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.639235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.639253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.639490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.639732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.639757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.639773] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.643322] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.652317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.652798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.653072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.653120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.653137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.653386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.653635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.653660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.653676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.657225] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.666214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.666735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.667054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.667103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.667120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.667370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.667611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.667636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.667652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.671201] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.680202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.680655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.680949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.680998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.681015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.681266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.681507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.681531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.681547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.685092] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.694102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.963 [2024-04-18 13:49:41.694631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.694865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.963 [2024-04-18 13:49:41.694895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.963 [2024-04-18 13:49:41.694912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.963 [2024-04-18 13:49:41.695149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.963 [2024-04-18 13:49:41.695398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.963 [2024-04-18 13:49:41.695428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.963 [2024-04-18 13:49:41.695446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.963 [2024-04-18 13:49:41.698985] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.963 [2024-04-18 13:49:41.707995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.964 [2024-04-18 13:49:41.708405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.708692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.708744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.964 [2024-04-18 13:49:41.708761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.964 [2024-04-18 13:49:41.708999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.964 [2024-04-18 13:49:41.709252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.964 [2024-04-18 13:49:41.709277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.964 [2024-04-18 13:49:41.709293] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.964 [2024-04-18 13:49:41.712843] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.964 [2024-04-18 13:49:41.721850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.964 [2024-04-18 13:49:41.722305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.722517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.722566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.964 [2024-04-18 13:49:41.722583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.964 [2024-04-18 13:49:41.722821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.964 [2024-04-18 13:49:41.723063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.964 [2024-04-18 13:49:41.723088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.964 [2024-04-18 13:49:41.723105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.964 [2024-04-18 13:49:41.726659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.964 [2024-04-18 13:49:41.735678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.964 [2024-04-18 13:49:41.736140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.736334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.736363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.964 [2024-04-18 13:49:41.736381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.964 [2024-04-18 13:49:41.736617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.964 [2024-04-18 13:49:41.736858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.964 [2024-04-18 13:49:41.736883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.964 [2024-04-18 13:49:41.736903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.964 [2024-04-18 13:49:41.740465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.964 [2024-04-18 13:49:41.749670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.964 [2024-04-18 13:49:41.750189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.750424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.750453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.964 [2024-04-18 13:49:41.750471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.964 [2024-04-18 13:49:41.750707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.964 [2024-04-18 13:49:41.750949] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.964 [2024-04-18 13:49:41.750973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.964 [2024-04-18 13:49:41.750989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.964 [2024-04-18 13:49:41.754541] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:38.964 [2024-04-18 13:49:41.763535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.964 [2024-04-18 13:49:41.764061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.764280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.964 [2024-04-18 13:49:41.764320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:38.964 [2024-04-18 13:49:41.764339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:38.964 [2024-04-18 13:49:41.764579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:38.964 [2024-04-18 13:49:41.764821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.964 [2024-04-18 13:49:41.764845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.964 [2024-04-18 13:49:41.764861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.223 [2024-04-18 13:49:41.768412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.223 [2024-04-18 13:49:41.777413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.223 [2024-04-18 13:49:41.777963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.778223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.778253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.223 [2024-04-18 13:49:41.778270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.223 [2024-04-18 13:49:41.778506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.223 [2024-04-18 13:49:41.778748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.223 [2024-04-18 13:49:41.778773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.223 [2024-04-18 13:49:41.778789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.223 [2024-04-18 13:49:41.782343] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.223 [2024-04-18 13:49:41.791345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.223 [2024-04-18 13:49:41.791869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.792094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.792144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.223 [2024-04-18 13:49:41.792162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.223 [2024-04-18 13:49:41.792410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.223 [2024-04-18 13:49:41.792652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.223 [2024-04-18 13:49:41.792676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.223 [2024-04-18 13:49:41.792692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.223 [2024-04-18 13:49:41.796243] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.223 [2024-04-18 13:49:41.805235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.223 [2024-04-18 13:49:41.805728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.805912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.223 [2024-04-18 13:49:41.805963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.223 [2024-04-18 13:49:41.805981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.223 [2024-04-18 13:49:41.806231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.223 [2024-04-18 13:49:41.806474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.223 [2024-04-18 13:49:41.806498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.223 [2024-04-18 13:49:41.806514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.223 [2024-04-18 13:49:41.810056] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.223 [2024-04-18 13:49:41.819053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.223 [2024-04-18 13:49:41.819562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.819778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.819837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.223 [2024-04-18 13:49:41.819856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.223 [2024-04-18 13:49:41.820093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.223 [2024-04-18 13:49:41.820347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.223 [2024-04-18 13:49:41.820372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.223 [2024-04-18 13:49:41.820388] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.223 [2024-04-18 13:49:41.823931] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.223 [2024-04-18 13:49:41.832924] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.223 [2024-04-18 13:49:41.833434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.833755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.833804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.223 [2024-04-18 13:49:41.833822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.223 [2024-04-18 13:49:41.834059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.223 [2024-04-18 13:49:41.834319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.223 [2024-04-18 13:49:41.834345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.223 [2024-04-18 13:49:41.834361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.223 [2024-04-18 13:49:41.837906] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.223 [2024-04-18 13:49:41.846904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.223 [2024-04-18 13:49:41.847412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.847700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.223 [2024-04-18 13:49:41.847748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.223 [2024-04-18 13:49:41.847766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.223 [2024-04-18 13:49:41.848003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.848258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.848284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.848300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.851843] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.860836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.861293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.861552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.861597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.861615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.861862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.862103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.862127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.862143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.865698] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.874726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.875163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.875339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.875368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.875386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.875623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.875864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.875887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.875903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.879461] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.888542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.888985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.889262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.889287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.889303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.889516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.889773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.889798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.889813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.893355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.902538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.902992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.903223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.903249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.903264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.903479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.903731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.903756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.903772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.907317] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.916484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.916942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.917168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.917211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.917231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.917467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.917708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.917731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.917747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.921300] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.930323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.930809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.931081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.931125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.931143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.931389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.931631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.931655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.931671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.935232] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.943879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.944354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.944652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.944679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.944695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.944908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.945125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.945147] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.945162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.948339] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.957223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.957691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.957918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.957942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.957961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.958155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.958385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.958407] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.958420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.961404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.970531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.224 [2024-04-18 13:49:41.970978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.971143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.224 [2024-04-18 13:49:41.971189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.224 [2024-04-18 13:49:41.971206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.224 [2024-04-18 13:49:41.971420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.224 [2024-04-18 13:49:41.971646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.224 [2024-04-18 13:49:41.971667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.224 [2024-04-18 13:49:41.971680] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.224 [2024-04-18 13:49:41.974614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.224 [2024-04-18 13:49:41.983698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.225 [2024-04-18 13:49:41.984098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:41.984251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:41.984276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.225 [2024-04-18 13:49:41.984291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.225 [2024-04-18 13:49:41.984505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.225 [2024-04-18 13:49:41.984712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.225 [2024-04-18 13:49:41.984732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.225 [2024-04-18 13:49:41.984746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.225 [2024-04-18 13:49:41.987678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.225 [2024-04-18 13:49:41.996863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.225 [2024-04-18 13:49:41.997320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:41.997586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:41.997610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.225 [2024-04-18 13:49:41.997623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.225 [2024-04-18 13:49:41.997816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.225 [2024-04-18 13:49:41.998007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.225 [2024-04-18 13:49:41.998026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.225 [2024-04-18 13:49:41.998038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.225 [2024-04-18 13:49:42.001132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.225 [2024-04-18 13:49:42.010139] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.225 [2024-04-18 13:49:42.010609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:42.010875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:42.010898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.225 [2024-04-18 13:49:42.010913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.225 [2024-04-18 13:49:42.011100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.225 [2024-04-18 13:49:42.011341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.225 [2024-04-18 13:49:42.011364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.225 [2024-04-18 13:49:42.011378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.225 [2024-04-18 13:49:42.014329] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.225 [2024-04-18 13:49:42.023588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.225 [2024-04-18 13:49:42.024037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:42.024308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.225 [2024-04-18 13:49:42.024335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.225 [2024-04-18 13:49:42.024350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.225 [2024-04-18 13:49:42.024580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.225 [2024-04-18 13:49:42.024772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.225 [2024-04-18 13:49:42.024793] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.225 [2024-04-18 13:49:42.024806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.225 [2024-04-18 13:49:42.028198] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.036936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.037353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.037526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.037549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.484 [2024-04-18 13:49:42.037564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.484 [2024-04-18 13:49:42.037751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.484 [2024-04-18 13:49:42.037948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.484 [2024-04-18 13:49:42.037969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.484 [2024-04-18 13:49:42.037982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.484 [2024-04-18 13:49:42.040977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.050207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.050664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.050928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.050964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.484 [2024-04-18 13:49:42.050978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.484 [2024-04-18 13:49:42.051191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.484 [2024-04-18 13:49:42.051421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.484 [2024-04-18 13:49:42.051452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.484 [2024-04-18 13:49:42.051480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.484 [2024-04-18 13:49:42.054416] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.063414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.063872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.064046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.064067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.484 [2024-04-18 13:49:42.064089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.484 [2024-04-18 13:49:42.064323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.484 [2024-04-18 13:49:42.064553] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.484 [2024-04-18 13:49:42.064574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.484 [2024-04-18 13:49:42.064586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.484 [2024-04-18 13:49:42.067622] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.076673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.077152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.077393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.077417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.484 [2024-04-18 13:49:42.077431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.484 [2024-04-18 13:49:42.077635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.484 [2024-04-18 13:49:42.077827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.484 [2024-04-18 13:49:42.077853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.484 [2024-04-18 13:49:42.077867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.484 [2024-04-18 13:49:42.080829] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.089835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.090271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.090508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.090531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.484 [2024-04-18 13:49:42.090545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.484 [2024-04-18 13:49:42.090733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.484 [2024-04-18 13:49:42.090925] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.484 [2024-04-18 13:49:42.090946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.484 [2024-04-18 13:49:42.090959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.484 [2024-04-18 13:49:42.093938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.484 [2024-04-18 13:49:42.103038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.484 [2024-04-18 13:49:42.103500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.103621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.484 [2024-04-18 13:49:42.103644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.103658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.103845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.104037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.104058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.104071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.107040] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.116290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.485 [2024-04-18 13:49:42.116787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.117047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.117070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.117094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.117330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.117561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.117582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.117598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.120529] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.129546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.485 [2024-04-18 13:49:42.130050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.130322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.130348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.130363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.130590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.130782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.130802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.130814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.133742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.142797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.485 [2024-04-18 13:49:42.143241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.143395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.143418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.143433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.143636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.143829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.143849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.143862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.146832] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.156050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.485 [2024-04-18 13:49:42.156505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.156723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.156746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.156760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.156947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.157150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.157170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.157207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.160194] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.169274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.485 [2024-04-18 13:49:42.169744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.169984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.485 [2024-04-18 13:49:42.170008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:39.485 [2024-04-18 13:49:42.170023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:39.485 [2024-04-18 13:49:42.170254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:39.485 [2024-04-18 13:49:42.170471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:39.485 [2024-04-18 13:49:42.170493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:39.485 [2024-04-18 13:49:42.170506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:39.485 [2024-04-18 13:49:42.173434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:39.485 [2024-04-18 13:49:42.182484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.485 [2024-04-18 13:49:42.182959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.183218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.183257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.485 [2024-04-18 13:49:42.183272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.485 [2024-04-18 13:49:42.183487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.485 [2024-04-18 13:49:42.183696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.485 [2024-04-18 13:49:42.183717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.485 [2024-04-18 13:49:42.183729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.485 [2024-04-18 13:49:42.186659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.485 [2024-04-18 13:49:42.195762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.485 [2024-04-18 13:49:42.196205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.196447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.196490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.485 [2024-04-18 13:49:42.196503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.485 [2024-04-18 13:49:42.196691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.485 [2024-04-18 13:49:42.196882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.485 [2024-04-18 13:49:42.196900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.485 [2024-04-18 13:49:42.196912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.485 [2024-04-18 13:49:42.199999] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.485 [2024-04-18 13:49:42.209027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.485 [2024-04-18 13:49:42.209484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.209775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.209799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.485 [2024-04-18 13:49:42.209814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.485 [2024-04-18 13:49:42.210002] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.485 [2024-04-18 13:49:42.210220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.485 [2024-04-18 13:49:42.210257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.485 [2024-04-18 13:49:42.210270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.485 [2024-04-18 13:49:42.213221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.485 [2024-04-18 13:49:42.222247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.485 [2024-04-18 13:49:42.222664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.222853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.485 [2024-04-18 13:49:42.222877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.485 [2024-04-18 13:49:42.222891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.485 [2024-04-18 13:49:42.223079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.485 [2024-04-18 13:49:42.223326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.485 [2024-04-18 13:49:42.223349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.486 [2024-04-18 13:49:42.223363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.486 [2024-04-18 13:49:42.226310] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.486 [2024-04-18 13:49:42.235557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.486 [2024-04-18 13:49:42.236014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.236301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.236327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.486 [2024-04-18 13:49:42.236342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.486 [2024-04-18 13:49:42.236569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.486 [2024-04-18 13:49:42.236760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.486 [2024-04-18 13:49:42.236780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.486 [2024-04-18 13:49:42.236793] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.486 [2024-04-18 13:49:42.239725] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.486 [2024-04-18 13:49:42.248754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.486 [2024-04-18 13:49:42.249195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.249474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.249497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.486 [2024-04-18 13:49:42.249511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.486 [2024-04-18 13:49:42.249698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.486 [2024-04-18 13:49:42.249890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.486 [2024-04-18 13:49:42.249910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.486 [2024-04-18 13:49:42.249923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.486 [2024-04-18 13:49:42.252894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.486 [2024-04-18 13:49:42.261904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.486 [2024-04-18 13:49:42.262376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.262669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.262693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.486 [2024-04-18 13:49:42.262707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.486 [2024-04-18 13:49:42.262896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.486 [2024-04-18 13:49:42.263087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.486 [2024-04-18 13:49:42.263107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.486 [2024-04-18 13:49:42.263119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.486 [2024-04-18 13:49:42.266152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.486 [2024-04-18 13:49:42.275491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.486 [2024-04-18 13:49:42.275966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.276254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.276281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.486 [2024-04-18 13:49:42.276297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.486 [2024-04-18 13:49:42.276516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.486 [2024-04-18 13:49:42.276707] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.486 [2024-04-18 13:49:42.276727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.486 [2024-04-18 13:49:42.276739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.486 [2024-04-18 13:49:42.279708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.486 [2024-04-18 13:49:42.289216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.486 [2024-04-18 13:49:42.289716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.289976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.486 [2024-04-18 13:49:42.290004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.486 [2024-04-18 13:49:42.290020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.744 [2024-04-18 13:49:42.290250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.744 [2024-04-18 13:49:42.290495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.744 [2024-04-18 13:49:42.290517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.290531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.293551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.302462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.302927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.303191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.303216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.303246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.303445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.303671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.303692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.303705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.306636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.315690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.316109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.316262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.316287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.316302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.316516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.316725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.316745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.316758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.319691] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.328873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.329339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.329598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.329620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.329639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.329827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.330020] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.330040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.330053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.333012] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.342161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.342667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.342930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.342953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.342967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.343155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.343405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.343427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.343440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.346373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.355332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.355779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.356053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.356077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.356092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.356326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.356545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.356566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.356593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.359672] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.368758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.369139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.369325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.369351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.369367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.369600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.369792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.369812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.369824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.372957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.382344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.382739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.382873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.382896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.382911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.383098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.383323] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.383344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.383357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.386345] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.395908] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.396291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.396435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.396459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.396474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.396699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.396929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.396949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.396963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.400060] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.409125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.745 [2024-04-18 13:49:42.409513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.409710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.745 [2024-04-18 13:49:42.409733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.745 [2024-04-18 13:49:42.409747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.745 [2024-04-18 13:49:42.409935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.745 [2024-04-18 13:49:42.410131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.745 [2024-04-18 13:49:42.410150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.745 [2024-04-18 13:49:42.410163] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.745 [2024-04-18 13:49:42.413133] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.745 [2024-04-18 13:49:42.422386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.422750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.422873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.422896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.422910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.423117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.423341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.423363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.423377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.426431] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.435744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.436091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.436272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.436296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.436311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.436519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.436711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.436731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.436744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.439692] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.449049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.449402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.449592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.449615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.449629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.449817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.450009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.450032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.450045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.453004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.462240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.462623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.462753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.462775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.462789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.462977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.463193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.463214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.463227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.466222] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.475499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.475890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.476032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.476055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.476069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.476284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.476496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.476516] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.476528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.479470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.488674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.489140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.489306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.489330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.489345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.489552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.489744] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.489763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.489780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.492724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.502015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.502387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.502577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.502600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.502615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.502803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.503005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.503026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.503038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.505980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.515298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.515698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.515863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.515887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.515902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.516106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.516329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.516349] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.516362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.519652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.528596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.528981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.529187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.529212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.746 [2024-04-18 13:49:42.529227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.746 [2024-04-18 13:49:42.529426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.746 [2024-04-18 13:49:42.529655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.746 [2024-04-18 13:49:42.529675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.746 [2024-04-18 13:49:42.529688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.746 [2024-04-18 13:49:42.532658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:39.746 [2024-04-18 13:49:42.541818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:39.746 [2024-04-18 13:49:42.542229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.542468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.746 [2024-04-18 13:49:42.542491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:39.747 [2024-04-18 13:49:42.542505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:39.747 [2024-04-18 13:49:42.542693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:39.747 [2024-04-18 13:49:42.542886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.747 [2024-04-18 13:49:42.542907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:39.747 [2024-04-18 13:49:42.542919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:39.747 [2024-04-18 13:49:42.545893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.555214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.555696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.555947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.555973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.556003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.556251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.556486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.556508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.556522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.559557] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.568465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.568908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.569109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.569132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.569146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.569379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.569608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.569628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.569640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.572570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.581736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.582207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.582397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.582421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.582436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.582642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.582834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.582854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.582867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.585835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.595070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.595421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.595624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.595648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.595662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.595850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.596042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.596061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.596074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.599075] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.608341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.608718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.608869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.608892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.608907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.609095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.609314] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.609335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.609348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.612287] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.621509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.621868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.622025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.622048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.622062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.622279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.622499] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.006 [2024-04-18 13:49:42.622518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.006 [2024-04-18 13:49:42.622545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.006 [2024-04-18 13:49:42.625467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.006 [2024-04-18 13:49:42.634745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.006 [2024-04-18 13:49:42.635181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.635349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.006 [2024-04-18 13:49:42.635383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.006 [2024-04-18 13:49:42.635398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.006 [2024-04-18 13:49:42.635605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.006 [2024-04-18 13:49:42.635796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.635815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.635827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.638770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.647959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.648358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.648632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.648656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.648670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.648863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.649072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.649092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.649105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.652058] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.661257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.661659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.661798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.661825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.661840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.662028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.662263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.662284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.662297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.665248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.675087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.675538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.675685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.675714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.675731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.675968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.676223] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.676248] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.676263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.679809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.689015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.689442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.689612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.689640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.689657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.689893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.690134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.690158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.690173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.693732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.702940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.703373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.703530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.703558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.703584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.703821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.704062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.704086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.704102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.707657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.716874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.717271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.717434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.717463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.717480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.717716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.717957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.717981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.717997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.721558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.730778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.731242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.731453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.731482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.731499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.731736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.731977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.732000] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.732015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.735571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.744791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.745237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.745515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.745566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.007 [2024-04-18 13:49:42.745584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.007 [2024-04-18 13:49:42.745825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.007 [2024-04-18 13:49:42.746065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.007 [2024-04-18 13:49:42.746090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.007 [2024-04-18 13:49:42.746106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.007 [2024-04-18 13:49:42.749663] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.007 [2024-04-18 13:49:42.758663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.007 [2024-04-18 13:49:42.759169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.759382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.007 [2024-04-18 13:49:42.759430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.008 [2024-04-18 13:49:42.759448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.008 [2024-04-18 13:49:42.759685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.008 [2024-04-18 13:49:42.759926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.008 [2024-04-18 13:49:42.759951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.008 [2024-04-18 13:49:42.759967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.008 [2024-04-18 13:49:42.763529] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.008 [2024-04-18 13:49:42.772395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.008 [2024-04-18 13:49:42.772893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.773119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.773145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.008 [2024-04-18 13:49:42.773184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.008 [2024-04-18 13:49:42.773436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.008 [2024-04-18 13:49:42.773698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.008 [2024-04-18 13:49:42.773720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.008 [2024-04-18 13:49:42.773734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.008 [2024-04-18 13:49:42.777233] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.008 [2024-04-18 13:49:42.786234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.008 [2024-04-18 13:49:42.786733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.786973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.787023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.008 [2024-04-18 13:49:42.787040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.008 [2024-04-18 13:49:42.787289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.008 [2024-04-18 13:49:42.787538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.008 [2024-04-18 13:49:42.787562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.008 [2024-04-18 13:49:42.787579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.008 [2024-04-18 13:49:42.791130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.008 [2024-04-18 13:49:42.800122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.008 [2024-04-18 13:49:42.800589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.800843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.008 [2024-04-18 13:49:42.800895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.008 [2024-04-18 13:49:42.800912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.008 [2024-04-18 13:49:42.801148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.008 [2024-04-18 13:49:42.801402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.008 [2024-04-18 13:49:42.801427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.008 [2024-04-18 13:49:42.801443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.008 [2024-04-18 13:49:42.804988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.813993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.814457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.814661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.814712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.814730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.814967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.815222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.815247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.815264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.818808] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.827797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.828267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.828497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.828547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.828565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.828801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.829043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.829073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.829090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.832643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.841643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.842127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.842337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.842366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.842384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.842621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.842863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.842887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.842903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.846457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.855452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.855937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.856189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.856218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.856236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.856472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.856714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.856739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.856755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.860308] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.869318] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.869791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.870036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.870093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.870111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.870363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.870603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.870631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.870652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.874208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.883197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.883798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.884076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.884129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.884148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.884406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.884649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.884674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.884690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.888244] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.897029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.897511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.897721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.897770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.897788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.898025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.898282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.898306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.898322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.901865] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.910859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.911332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.911529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.911583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.911601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.268 [2024-04-18 13:49:42.911842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.268 [2024-04-18 13:49:42.912084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.268 [2024-04-18 13:49:42.912108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.268 [2024-04-18 13:49:42.912125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.268 [2024-04-18 13:49:42.915690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.268 [2024-04-18 13:49:42.924685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.268 [2024-04-18 13:49:42.925160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.925416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.268 [2024-04-18 13:49:42.925446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.268 [2024-04-18 13:49:42.925464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:42.925701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:42.925943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:42.925968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:42.925984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:42.929539] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:42.938542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:42.939032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.939281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.939333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:42.939351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:42.939588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:42.939830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:42.939855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:42.939870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:42.943427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:42.952426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:42.952903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.953104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.953133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:42.953151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:42.953398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:42.953640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:42.953664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:42.953679] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:42.957233] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:42.966389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:42.966845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.966986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.967016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:42.967034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:42.967284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:42.967528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:42.967552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:42.967568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:42.971113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:42.980325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:42.980806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.981070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:42.981117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:42.981135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:42.981384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:42.981625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:42.981650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:42.981666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:42.985216] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2678933 Killed "${NVMF_APP[@]}" "$@" 00:20:40.269 13:49:42 -- host/bdevperf.sh@36 -- # tgt_init 00:20:40.269 13:49:42 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:40.269 13:49:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:40.269 13:49:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:40.269 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-04-18 13:49:42.994210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.269 [2024-04-18 13:49:42.994681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.269 [2024-04-18 13:49:42.994880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.269 [2024-04-18 13:49:42.994931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.269 [2024-04-18 13:49:42.994949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.269 [2024-04-18 13:49:42.995195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.269 [2024-04-18 13:49:42.995449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.269 [2024-04-18 13:49:42.995473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.269 [2024-04-18 13:49:42.995495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:40.269 13:49:42 -- nvmf/common.sh@470 -- # nvmfpid=2680022 00:20:40.269 13:49:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:40.269 13:49:42 -- nvmf/common.sh@471 -- # waitforlisten 2680022 00:20:40.269 13:49:42 -- common/autotest_common.sh@817 -- # '[' -z 2680022 ']' 00:20:40.269 13:49:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.269 13:49:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:40.269 13:49:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.269 13:49:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:40.269 13:49:42 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-04-18 13:49:42.999035] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.269 [2024-04-18 13:49:43.008055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:43.008509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:43.008679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:43.008724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:43.008742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:43.008979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:43.009232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:43.009257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:43.009273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:43.012818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:43.022027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:40.269 [2024-04-18 13:49:43.022428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:43.022616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.269 [2024-04-18 13:49:43.022645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420
00:20:40.269 [2024-04-18 13:49:43.022663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set
00:20:40.269 [2024-04-18 13:49:43.022900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor
00:20:40.269 [2024-04-18 13:49:43.023140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:40.269 [2024-04-18 13:49:43.023164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:40.269 [2024-04-18 13:49:43.023189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:40.269 [2024-04-18 13:49:43.026756] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:40.269 [2024-04-18 13:49:43.035589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.269 [2024-04-18 13:49:43.035934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.269 [2024-04-18 13:49:43.036114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.269 [2024-04-18 13:49:43.036138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.269 [2024-04-18 13:49:43.036173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.269 [2024-04-18 13:49:43.036399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.269 [2024-04-18 13:49:43.036618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.270 [2024-04-18 13:49:43.036638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.270 [2024-04-18 13:49:43.036651] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.270 [2024-04-18 13:49:43.039694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.270 [2024-04-18 13:49:43.043110] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:20:40.270 [2024-04-18 13:49:43.043190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.270 [2024-04-18 13:49:43.048916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.270 [2024-04-18 13:49:43.049265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.270 [2024-04-18 13:49:43.049415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.270 [2024-04-18 13:49:43.049441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.270 [2024-04-18 13:49:43.049456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.270 [2024-04-18 13:49:43.049678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.270 [2024-04-18 13:49:43.049870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.270 [2024-04-18 13:49:43.049888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.270 [2024-04-18 13:49:43.049901] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.270 [2024-04-18 13:49:43.052950] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.270 [2024-04-18 13:49:43.062214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.270 [2024-04-18 13:49:43.062597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.270 [2024-04-18 13:49:43.062746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.270 [2024-04-18 13:49:43.062769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.270 [2024-04-18 13:49:43.062783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.270 [2024-04-18 13:49:43.062971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.270 [2024-04-18 13:49:43.063187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.270 [2024-04-18 13:49:43.063208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.270 [2024-04-18 13:49:43.063221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.270 [2024-04-18 13:49:43.066225] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.529 [2024-04-18 13:49:43.075672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.529 [2024-04-18 13:49:43.076003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.076169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.076200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.529 [2024-04-18 13:49:43.076216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.529 [2024-04-18 13:49:43.076415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.529 [2024-04-18 13:49:43.076640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.529 [2024-04-18 13:49:43.076659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.529 [2024-04-18 13:49:43.076672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.529 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.529 [2024-04-18 13:49:43.079884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.529 [2024-04-18 13:49:43.089597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.529 [2024-04-18 13:49:43.089994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.090187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.090216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.529 [2024-04-18 13:49:43.090248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.529 [2024-04-18 13:49:43.090468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.529 [2024-04-18 13:49:43.090715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.529 [2024-04-18 13:49:43.090739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.529 [2024-04-18 13:49:43.090755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.529 [2024-04-18 13:49:43.094332] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.529 [2024-04-18 13:49:43.103740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.529 [2024-04-18 13:49:43.104162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.104334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.104358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.529 [2024-04-18 13:49:43.104373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.529 [2024-04-18 13:49:43.104617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.529 [2024-04-18 13:49:43.104859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.529 [2024-04-18 13:49:43.104882] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.529 [2024-04-18 13:49:43.104898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.529 [2024-04-18 13:49:43.108416] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.529 [2024-04-18 13:49:43.114911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.529 [2024-04-18 13:49:43.117651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.529 [2024-04-18 13:49:43.118043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.118205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.118234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.529 [2024-04-18 13:49:43.118264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.529 [2024-04-18 13:49:43.118459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.529 [2024-04-18 13:49:43.118717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.529 [2024-04-18 13:49:43.118742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.529 [2024-04-18 13:49:43.118758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.529 [2024-04-18 13:49:43.122286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.529 [2024-04-18 13:49:43.131517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.529 [2024-04-18 13:49:43.132029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.132238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.529 [2024-04-18 13:49:43.132264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.529 [2024-04-18 13:49:43.132284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.529 [2024-04-18 13:49:43.132489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.132751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.132776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.132796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.136305] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.145304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.145679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.145850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.145880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.145898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.146136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.146370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.146392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.146405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.149893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.159162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.159580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.159750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.159792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.159812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.160049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.160299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.160321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.160334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.163813] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.173071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.173513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.173705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.173734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.173752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.173989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.174249] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.174270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.174283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.177747] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.186999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.187562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.187763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.187792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.187815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.188063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.188315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.188337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.188354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.191821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.200937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.201375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.201505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.201533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.201563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.201802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.202044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.202069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.202085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.205560] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.214817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.215238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.215390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.215415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.215430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.215679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.215921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.215946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.215964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.219434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.228677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.229074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.229260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.229286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.229302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.229530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.229771] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.229796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.229811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.230648] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.530 [2024-04-18 13:49:43.230680] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.530 [2024-04-18 13:49:43.230694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.530 [2024-04-18 13:49:43.230719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.530 [2024-04-18 13:49:43.230729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.530 [2024-04-18 13:49:43.230814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.530 [2024-04-18 13:49:43.230874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.530 [2024-04-18 13:49:43.230877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.530 [2024-04-18 13:49:43.233034] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.530 [2024-04-18 13:49:43.242071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.242592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.242783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.242808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.530 [2024-04-18 13:49:43.242828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.530 [2024-04-18 13:49:43.243039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.530 [2024-04-18 13:49:43.243278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.530 [2024-04-18 13:49:43.243302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.530 [2024-04-18 13:49:43.243320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.530 [2024-04-18 13:49:43.246440] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.530 [2024-04-18 13:49:43.255667] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.530 [2024-04-18 13:49:43.256152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.256346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.530 [2024-04-18 13:49:43.256373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.256393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.256621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.256832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.256853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.256870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.259988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.531 [2024-04-18 13:49:43.269152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.531 [2024-04-18 13:49:43.269680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.269846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.269871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.269891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.270101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.270339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.270362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.270380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.273523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.531 [2024-04-18 13:49:43.282849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.531 [2024-04-18 13:49:43.283412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.283668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.283695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.283716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.283927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.284140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.284186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.284206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.287442] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.531 [2024-04-18 13:49:43.296551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.531 [2024-04-18 13:49:43.297093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.297331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.297360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.297380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.297629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.297841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.297864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.297882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.301092] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.531 [2024-04-18 13:49:43.310007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.531 [2024-04-18 13:49:43.310583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.310737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.310764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.310784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.310996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.311235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.311259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.311278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.314393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.531 [2024-04-18 13:49:43.323424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.531 [2024-04-18 13:49:43.323883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.324127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.531 [2024-04-18 13:49:43.324152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.531 [2024-04-18 13:49:43.324167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.531 [2024-04-18 13:49:43.324395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.531 [2024-04-18 13:49:43.324616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.531 [2024-04-18 13:49:43.324639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.531 [2024-04-18 13:49:43.324652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.531 [2024-04-18 13:49:43.327758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.790 [2024-04-18 13:49:43.337087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.790 [2024-04-18 13:49:43.337551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.337766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.337792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.790 [2024-04-18 13:49:43.337808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.790 [2024-04-18 13:49:43.338015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.790 [2024-04-18 13:49:43.338251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.790 [2024-04-18 13:49:43.338275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.790 [2024-04-18 13:49:43.338290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.790 [2024-04-18 13:49:43.341527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.790 13:49:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.790 13:49:43 -- common/autotest_common.sh@850 -- # return 0 00:20:40.790 13:49:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:40.790 13:49:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:40.790 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.790 [2024-04-18 13:49:43.350663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.790 [2024-04-18 13:49:43.351015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.351187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.351215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.790 [2024-04-18 13:49:43.351242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.790 [2024-04-18 13:49:43.351458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.790 [2024-04-18 13:49:43.351694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.790 [2024-04-18 13:49:43.351715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.790 [2024-04-18 13:49:43.351729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.790 [2024-04-18 13:49:43.354915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.790 [2024-04-18 13:49:43.364134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.790 [2024-04-18 13:49:43.364591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.364792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.364817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.790 [2024-04-18 13:49:43.364832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.790 [2024-04-18 13:49:43.365032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.790 [2024-04-18 13:49:43.365268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.790 [2024-04-18 13:49:43.365291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.790 [2024-04-18 13:49:43.365305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.790 [2024-04-18 13:49:43.368617] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.790 13:49:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.790 13:49:43 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.790 13:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.790 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.790 [2024-04-18 13:49:43.377691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.790 [2024-04-18 13:49:43.378191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.378359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.790 [2024-04-18 13:49:43.378386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.790 [2024-04-18 13:49:43.378402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.790 [2024-04-18 13:49:43.378636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.790 [2024-04-18 13:49:43.378842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.790 [2024-04-18 13:49:43.378863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.790 [2024-04-18 13:49:43.378877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.790 [2024-04-18 13:49:43.379386] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.790 [2024-04-18 13:49:43.382046] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.790 13:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.790 13:49:43 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.791 13:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.791 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.791 [2024-04-18 13:49:43.391296] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.391764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.391941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.391965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.791 [2024-04-18 13:49:43.391979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.791 [2024-04-18 13:49:43.392202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.791 [2024-04-18 13:49:43.392447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.791 [2024-04-18 13:49:43.392488] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.791 [2024-04-18 13:49:43.392502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.791 [2024-04-18 13:49:43.395596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.791 [2024-04-18 13:49:43.404786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.405189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.405359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.405387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.791 [2024-04-18 13:49:43.405403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.791 [2024-04-18 13:49:43.405640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.791 [2024-04-18 13:49:43.405845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.791 [2024-04-18 13:49:43.405866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.791 [2024-04-18 13:49:43.405880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.791 [2024-04-18 13:49:43.409080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.791 [2024-04-18 13:49:43.418310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.418916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.419208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.419236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.791 [2024-04-18 13:49:43.419256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.791 [2024-04-18 13:49:43.419500] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.791 [2024-04-18 13:49:43.419714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.791 [2024-04-18 13:49:43.419736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.791 [2024-04-18 13:49:43.419755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.791 [2024-04-18 13:49:43.422872] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:40.791 Malloc0 00:20:40.791 13:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.791 13:49:43 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.791 13:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.791 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.791 [2024-04-18 13:49:43.431917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.432393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.432636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.432661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.791 [2024-04-18 13:49:43.432686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.791 [2024-04-18 13:49:43.432888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.791 [2024-04-18 13:49:43.433094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.791 [2024-04-18 13:49:43.433115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:40.791 [2024-04-18 13:49:43.433129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:40.791 13:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.791 13:49:43 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.791 13:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.791 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.791 [2024-04-18 13:49:43.436412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.791 13:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.791 13:49:43 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.791 13:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.791 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:40.791 [2024-04-18 13:49:43.445510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.445959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.446143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.791 [2024-04-18 13:49:43.446190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x254c1f0 with addr=10.0.0.2, port=4420 00:20:40.791 [2024-04-18 13:49:43.446208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c1f0 is same with the state(5) to be set 00:20:40.791 [2024-04-18 13:49:43.446421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c1f0 (9): Bad file descriptor 00:20:40.791 [2024-04-18 13:49:43.446657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.791 [2024-04-18 13:49:43.446679] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:20:40.791 [2024-04-18 13:49:43.446692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:40.791 [2024-04-18 13:49:43.446743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.791 [2024-04-18 13:49:43.449878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:40.791 13:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.791 13:49:43 -- host/bdevperf.sh@38 -- # wait 2679352 00:20:40.791 [2024-04-18 13:49:43.459056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:40.791 [2024-04-18 13:49:43.494058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:50.766 00:20:50.766 Latency(us) 00:20:50.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:50.766 Verification LBA range: start 0x0 length 0x4000 00:20:50.766 Nvme1n1 : 15.01 6615.95 25.84 8862.38 0.00 8245.98 813.13 23204.60 00:20:50.766 =================================================================================================================== 00:20:50.766 Total : 6615.95 25.84 8862.38 0.00 8245.98 813.13 23204.60 00:20:50.766 13:49:52 -- host/bdevperf.sh@39 -- # sync 00:20:50.766 13:49:52 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.766 13:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.766 13:49:52 -- common/autotest_common.sh@10 -- # set +x 00:20:50.766 13:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.766 13:49:52 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:50.766 13:49:52 -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:50.766 13:49:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:50.766 13:49:52 -- nvmf/common.sh@117 -- # sync 
00:20:50.766 13:49:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.766 13:49:52 -- nvmf/common.sh@120 -- # set +e 00:20:50.766 13:49:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.766 13:49:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.766 rmmod nvme_tcp 00:20:50.766 rmmod nvme_fabrics 00:20:50.766 rmmod nvme_keyring 00:20:50.766 13:49:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.766 13:49:52 -- nvmf/common.sh@124 -- # set -e 00:20:50.766 13:49:52 -- nvmf/common.sh@125 -- # return 0 00:20:50.766 13:49:52 -- nvmf/common.sh@478 -- # '[' -n 2680022 ']' 00:20:50.766 13:49:52 -- nvmf/common.sh@479 -- # killprocess 2680022 00:20:50.766 13:49:52 -- common/autotest_common.sh@936 -- # '[' -z 2680022 ']' 00:20:50.766 13:49:52 -- common/autotest_common.sh@940 -- # kill -0 2680022 00:20:50.766 13:49:52 -- common/autotest_common.sh@941 -- # uname 00:20:50.766 13:49:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.766 13:49:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2680022 00:20:50.766 13:49:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:50.766 13:49:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:50.766 13:49:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2680022' 00:20:50.766 killing process with pid 2680022 00:20:50.766 13:49:52 -- common/autotest_common.sh@955 -- # kill 2680022 00:20:50.766 13:49:52 -- common/autotest_common.sh@960 -- # wait 2680022 00:20:50.766 13:49:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:50.766 13:49:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:50.766 13:49:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:50.766 13:49:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.766 13:49:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.766 13:49:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.766 
13:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.766 13:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.668 13:49:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.668 00:20:52.668 real 0m23.381s 00:20:52.668 user 1m2.695s 00:20:52.668 sys 0m4.616s 00:20:52.668 13:49:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.668 13:49:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.668 ************************************ 00:20:52.668 END TEST nvmf_bdevperf 00:20:52.668 ************************************ 00:20:52.668 13:49:55 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:52.668 13:49:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:52.668 13:49:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.668 13:49:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.668 ************************************ 00:20:52.668 START TEST nvmf_target_disconnect 00:20:52.668 ************************************ 00:20:52.668 13:49:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:52.668 * Looking for test storage... 
00:20:52.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:52.668 13:49:55 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.668 13:49:55 -- nvmf/common.sh@7 -- # uname -s 00:20:52.668 13:49:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.668 13:49:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.668 13:49:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.668 13:49:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.668 13:49:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.668 13:49:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.668 13:49:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.668 13:49:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.668 13:49:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.668 13:49:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.668 13:49:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:52.668 13:49:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:52.668 13:49:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.668 13:49:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.668 13:49:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.668 13:49:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.668 13:49:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.668 13:49:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.668 13:49:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.668 13:49:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.668 13:49:55 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.668 13:49:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.668 13:49:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.668 13:49:55 -- paths/export.sh@5 -- # export PATH 00:20:52.668 13:49:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.668 13:49:55 -- nvmf/common.sh@47 -- # : 0 00:20:52.668 13:49:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.668 13:49:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.668 13:49:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.668 13:49:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.668 13:49:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.668 13:49:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.668 13:49:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.668 13:49:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.668 13:49:55 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:52.668 13:49:55 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:52.668 13:49:55 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:52.668 13:49:55 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:20:52.668 13:49:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:52.668 13:49:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.668 13:49:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:52.668 13:49:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:52.668 13:49:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:52.668 13:49:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.668 13:49:55 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.668 13:49:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.668 13:49:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:52.668 13:49:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:52.668 13:49:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.668 13:49:55 -- common/autotest_common.sh@10 -- # set +x 00:20:54.564 13:49:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:54.564 13:49:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.564 13:49:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.564 13:49:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.564 13:49:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.564 13:49:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.564 13:49:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.564 13:49:57 -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.564 13:49:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.564 13:49:57 -- nvmf/common.sh@296 -- # e810=() 00:20:54.564 13:49:57 -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.564 13:49:57 -- nvmf/common.sh@297 -- # x722=() 00:20:54.564 13:49:57 -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.564 13:49:57 -- nvmf/common.sh@298 -- # mlx=() 00:20:54.564 13:49:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.564 13:49:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.564 13:49:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.564 13:49:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.564 13:49:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.564 13:49:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:54.564 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:54.564 13:49:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.564 13:49:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:54.564 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:54.564 13:49:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:54.564 13:49:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.564 13:49:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.564 13:49:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.564 13:49:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:54.564 Found net devices under 0000:84:00.0: cvl_0_0 00:20:54.564 13:49:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.564 13:49:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.564 13:49:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.564 13:49:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.564 13:49:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:54.564 Found net devices under 0000:84:00.1: cvl_0_1 00:20:54.564 13:49:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.564 13:49:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:54.564 13:49:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:54.564 13:49:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:54.564 13:49:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.564 13:49:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.564 13:49:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.564 13:49:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:54.564 13:49:57 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.564 13:49:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.564 13:49:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:54.564 13:49:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.564 13:49:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.564 13:49:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:54.565 13:49:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:54.565 13:49:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.565 13:49:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.822 13:49:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.822 13:49:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.822 13:49:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:54.822 13:49:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.822 13:49:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.822 13:49:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.822 13:49:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:54.822 00:20:54.822 --- 10.0.0.2 ping statistics --- 00:20:54.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.822 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:54.822 13:49:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:20:54.822 00:20:54.822 --- 10.0.0.1 ping statistics --- 00:20:54.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.822 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:54.822 13:49:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.822 13:49:57 -- nvmf/common.sh@411 -- # return 0 00:20:54.822 13:49:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:54.822 13:49:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.822 13:49:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:54.822 13:49:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:54.822 13:49:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.822 13:49:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:54.822 13:49:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:54.822 13:49:57 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:54.822 13:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:54.822 13:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.822 13:49:57 -- common/autotest_common.sh@10 -- # set +x 00:20:54.822 ************************************ 00:20:54.822 START TEST nvmf_target_disconnect_tc1 00:20:54.822 ************************************ 00:20:54.822 13:49:57 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:54.822 13:49:57 -- host/target_disconnect.sh@32 -- # set +e 00:20:54.822 13:49:57 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.080 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.080 [2024-04-18 13:49:57.690217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.080 
[2024-04-18 13:49:57.690478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.080 [2024-04-18 13:49:57.690510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1229610 with addr=10.0.0.2, port=4420 00:20:55.080 [2024-04-18 13:49:57.690551] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:55.080 [2024-04-18 13:49:57.690582] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:55.080 [2024-04-18 13:49:57.690598] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:55.080 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:20:55.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:55.080 Initializing NVMe Controllers 00:20:55.080 13:49:57 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:55.080 13:49:57 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:55.080 13:49:57 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:55.080 13:49:57 -- common/autotest_common.sh@1139 -- # return 0 00:20:55.080 13:49:57 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:55.080 13:49:57 -- host/target_disconnect.sh@41 -- # set -e 00:20:55.080 00:20:55.080 real 0m0.098s 00:20:55.080 user 0m0.040s 00:20:55.080 sys 0m0.057s 00:20:55.080 13:49:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.080 13:49:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 ************************************ 00:20:55.080 END TEST nvmf_target_disconnect_tc1 00:20:55.080 ************************************ 00:20:55.080 13:49:57 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:55.080 13:49:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:55.080 13:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.080 13:49:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 
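The tc1 failure above is the expected path: the reconnect example is launched before any target is listening on 10.0.0.2:4420, so `connect()` fails with errno 111 (ECONNREFUSED) and `spdk_nvme_probe()` reports a failed controller scan. A minimal sketch of the same failure mode, assuming nothing is listening on the probed port (loopback port 4 here is a hypothetical stand-in for the not-yet-started target):

```shell
#!/usr/bin/env bash
# Sketch only: reproduce the ECONNREFUSED (errno 111) seen in tc1 by
# attempting a TCP connect to a port with no listener. Port 4 on loopback
# is assumed closed; it stands in for 10.0.0.2:4420 before nvmf_tgt starts.
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/4' 2>/dev/null; then
    echo "unexpected: something is listening"
else
    echo "connect() refused, as tc1 expects before the target starts"
fi
```

tc1 passes precisely because this refusal occurs; the `set +e` around the reconnect invocation lets the test observe the non-zero exit instead of aborting.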
************************************ 00:20:55.080 START TEST nvmf_target_disconnect_tc2 00:20:55.080 ************************************ 00:20:55.080 13:49:57 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:55.080 13:49:57 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:20:55.080 13:49:57 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:55.080 13:49:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:55.080 13:49:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.080 13:49:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 13:49:57 -- nvmf/common.sh@470 -- # nvmfpid=2683206 00:20:55.080 13:49:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:55.080 13:49:57 -- nvmf/common.sh@471 -- # waitforlisten 2683206 00:20:55.080 13:49:57 -- common/autotest_common.sh@817 -- # '[' -z 2683206 ']' 00:20:55.080 13:49:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.080 13:49:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.080 13:49:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.080 13:49:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.080 13:49:57 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 [2024-04-18 13:49:57.879691] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:20:55.081 [2024-04-18 13:49:57.879769] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.339 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.339 [2024-04-18 13:49:57.949669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.339 [2024-04-18 13:49:58.053742] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.339 [2024-04-18 13:49:58.053802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.339 [2024-04-18 13:49:58.053825] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.339 [2024-04-18 13:49:58.053836] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.339 [2024-04-18 13:49:58.053849] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.339 [2024-04-18 13:49:58.053931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:55.339 [2024-04-18 13:49:58.054040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:55.339 [2024-04-18 13:49:58.054124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:55.339 [2024-04-18 13:49:58.054127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.271 13:49:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.271 13:49:58 -- common/autotest_common.sh@850 -- # return 0 00:20:56.271 13:49:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:56.271 13:49:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 13:49:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.271 13:49:58 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 Malloc0 00:20:56.271 13:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 [2024-04-18 13:49:58.853359] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.271 13:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 13:49:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 13:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 [2024-04-18 13:49:58.881603] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.271 13:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:56.271 13:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.271 13:49:58 -- common/autotest_common.sh@10 -- # set +x 00:20:56.271 13:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.271 13:49:58 -- host/target_disconnect.sh@50 -- # reconnectpid=2683364 00:20:56.271 13:49:58 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.271 13:49:58 -- host/target_disconnect.sh@52 -- # sleep 2 00:20:56.271 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.200 13:50:00 -- host/target_disconnect.sh@53 -- # kill -9 2683206 00:20:58.200 13:50:00 -- host/target_disconnect.sh@55 -- # sleep 2 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 
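The tc2 setup above configures the target over RPC before the kill: a 64 MiB malloc bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners on 10.0.0.2:4420. Outside the harness's `rpc_cmd` wrapper, the same sequence would look roughly like the following (a sketch mirroring the log's RPC calls, assuming a running `nvmf_tgt` inside the cvl_0_0_ns_spdk namespace and the stock `rpc.py` from the SPDK tree; not runnable in isolation):

```
# Sketch of the tc2 target configuration, mirroring the rpc_cmd calls above.
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once `kill -9` takes down the nvmf_tgt pid, every queued I/O in the reconnect example completes with an error and the qpairs report CQ transport error -6, which is what the repeated "Read/Write completed with error (sct=0, sc=8)" lines below record.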
00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Write completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Write completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Write completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Write completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Write completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 
Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.200 Read completed with error (sct=0, sc=8) 00:20:58.200 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 [2024-04-18 13:50:00.909612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed 
with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 [2024-04-18 13:50:00.909957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error 
(sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, 
sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Read completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 Write completed with error (sct=0, sc=8) 00:20:58.201 starting I/O failed 00:20:58.201 [2024-04-18 13:50:00.910318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:58.201 [2024-04-18 13:50:00.910548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.910683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.910707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.910875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 
00:20:58.201 [2024-04-18 13:50:00.911222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.911613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.911837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.912016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.912363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 
00:20:58.201 [2024-04-18 13:50:00.912675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.912815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.912977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.913173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.913206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.201 [2024-04-18 13:50:00.913330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.913456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.201 [2024-04-18 13:50:00.913495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.201 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.913646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.913835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.913860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.913975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.914372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.914703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.914888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.915081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.915255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.915282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.915439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.915618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.915641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.915821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.916200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.916515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.916756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.916931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.917283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.917598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.917829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.918021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.918347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.918696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.918920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.919055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.919382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.919791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.919986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.920189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.920323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.920348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.920503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.920717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.920768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.920971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 
00:20:58.202 [2024-04-18 13:50:00.921298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.202 qpair failed and we were unable to recover it. 00:20:58.202 [2024-04-18 13:50:00.921693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.202 [2024-04-18 13:50:00.921936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.922107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.922262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.922289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.922446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.922639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.922691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 
00:20:58.203 [2024-04-18 13:50:00.922839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.923153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.923509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.923708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.923849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 
00:20:58.203 [2024-04-18 13:50:00.924199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.924549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.924787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.924953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.925097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.925120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.925310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.925437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.925463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 
00:20:58.203 [2024-04-18 13:50:00.926449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.926654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.926703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.926867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.927307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.927643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.927868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 
00:20:58.203 [2024-04-18 13:50:00.928004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.928384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.928767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.928978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.929121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.929327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.929353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 
00:20:58.203 [2024-04-18 13:50:00.929538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.929727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.929784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.929932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.930107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.203 [2024-04-18 13:50:00.930130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.203 qpair failed and we were unable to recover it. 00:20:58.203 [2024-04-18 13:50:00.930303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.930456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.930495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.930676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.930883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.930937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 
00:20:58.204 [2024-04-18 13:50:00.931066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.931392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.931748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.931974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.932131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.932295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.932321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 
00:20:58.204 [2024-04-18 13:50:00.932499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.932695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.932750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.932936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.933251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.933639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.933885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 
00:20:58.204 [2024-04-18 13:50:00.934075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.934415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.934759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.934988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.935187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.935325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.935368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 
00:20:58.204 [2024-04-18 13:50:00.935480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.935681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.935704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.935880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.936246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.936569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.936714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 
00:20:58.204 [2024-04-18 13:50:00.936889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.937035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.937058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.937184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.937300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.937325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.204 qpair failed and we were unable to recover it. 00:20:58.204 [2024-04-18 13:50:00.937493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.204 [2024-04-18 13:50:00.937607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.937630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.937801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.937909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.937932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 
00:20:58.205 [2024-04-18 13:50:00.938064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.938390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.938789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.938984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.939145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.939351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.939393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 
00:20:58.205 [2024-04-18 13:50:00.939595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.939774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.939816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.939960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.940318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.940675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.940855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 
00:20:58.205 [2024-04-18 13:50:00.941001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.941148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.941193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.941358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.941560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.941618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.941789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.942235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 
00:20:58.205 [2024-04-18 13:50:00.942562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.942821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.942982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.943317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.943689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.943942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 
00:20:58.205 [2024-04-18 13:50:00.944127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.944316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.944359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.944556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.944758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.944822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.944974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.945122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.205 [2024-04-18 13:50:00.945145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.205 qpair failed and we were unable to recover it. 00:20:58.205 [2024-04-18 13:50:00.945387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.945575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.945633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.206 qpair failed and we were unable to recover it. 
00:20:58.206 [2024-04-18 13:50:00.945799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.945999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.946021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.206 qpair failed and we were unable to recover it. 00:20:58.206 [2024-04-18 13:50:00.946222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.946394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.946437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.206 qpair failed and we were unable to recover it. 00:20:58.206 [2024-04-18 13:50:00.946606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.946825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.946878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.206 qpair failed and we were unable to recover it. 00:20:58.206 [2024-04-18 13:50:00.947070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.947189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.206 [2024-04-18 13:50:00.947214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.206 qpair failed and we were unable to recover it. 
00:20:58.206 [2024-04-18 13:50:00.947410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.947603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.947657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.947853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.948197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.948613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.948894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.949058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.949214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.949256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.949415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.949624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.949673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.949796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.949987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.950009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.950188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.950319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.950361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.950509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.950664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.950706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.950884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.951045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.951082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.206 qpair failed and we were unable to recover it.
00:20:58.206 [2024-04-18 13:50:00.951276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.951451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.206 [2024-04-18 13:50:00.951475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.951636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.951784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.951821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.951988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.952385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.952758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.952975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.953122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.953326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.953369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.953524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.953742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.953805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.953938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.954302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.954713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.954925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.955107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.955256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.955281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.955436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.955645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.955697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.955863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.956203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.956569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.956743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.956895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.957234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.957568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.957752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.957903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.958220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.958585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.207 [2024-04-18 13:50:00.958828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.207 qpair failed and we were unable to recover it.
00:20:58.207 [2024-04-18 13:50:00.959021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.959197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.959222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.959366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.959579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.959629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.959800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.960233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.960590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.960834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.961021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.961422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.961800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.961961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.962129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.962293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.962334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.962490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.962660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.962701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.962847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.963212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.963622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.963882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.964058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.964166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.964220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.964392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.964579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.964639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.964821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.964980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.965016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.965174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.965353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.965394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.965576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.965703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.965745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.965901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.966093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.966129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.966264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.966411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.966435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.208 qpair failed and we were unable to recover it.
00:20:58.208 [2024-04-18 13:50:00.966564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.208 [2024-04-18 13:50:00.966733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.966772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.966961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.967283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.967662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.967883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.968065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.968233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.968257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.968421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.968614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.968676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.968855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.969169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.969590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.969835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.970031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.970340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.970727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.970991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.971172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.971331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.971372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.971542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.971704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.971732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.971901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.972256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.972630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.972876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.973026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.973166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.973211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.973408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.973635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.973688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.973885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.974073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.974095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.974289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.974438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.974482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.209 qpair failed and we were unable to recover it.
00:20:58.209 [2024-04-18 13:50:00.974606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.209 [2024-04-18 13:50:00.974822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.974884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.975024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.975402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.975777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.975947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.976115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.976230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.976254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.976422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.976585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.976625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.976820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.976993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.977015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.977141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.977281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.977331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.977487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.977687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.977725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.977922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.978305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.978773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.978975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.979151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.979321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.979362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.979531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.979752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.979807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.979992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.980137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.980173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.980363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.980571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.210 [2024-04-18 13:50:00.980618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.210 qpair failed and we were unable to recover it.
00:20:58.210 [2024-04-18 13:50:00.980782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.980988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.981028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.210 [2024-04-18 13:50:00.981212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.981381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.981425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.210 [2024-04-18 13:50:00.981576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.981789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.981835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.210 [2024-04-18 13:50:00.982029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.982199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.982238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 
00:20:58.210 [2024-04-18 13:50:00.982380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.982537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.982579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.210 [2024-04-18 13:50:00.982753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.982968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.983019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.210 [2024-04-18 13:50:00.983222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.983412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.210 [2024-04-18 13:50:00.983466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.210 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.983650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.983842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.983895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.984063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.984433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.984794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.984977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.985138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.985292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.985319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.985481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.985688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.985742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.985907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.986294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.986642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.986816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.986947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.987307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.987722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.987917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.988068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.988438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.988770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.988934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.989099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.989254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.989279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.989427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.989616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.989669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.989852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.990221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.990553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.990807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.990970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 
00:20:58.211 [2024-04-18 13:50:00.991378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.991735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.991944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.992113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.992274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.992303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.211 qpair failed and we were unable to recover it. 00:20:58.211 [2024-04-18 13:50:00.992474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.992674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.211 [2024-04-18 13:50:00.992716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:00.992867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.993204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.993597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.993861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.993994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:00.994324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.994759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.994943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.995107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.995291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.995334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.995506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.995666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.995695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:00.995849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.996153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.996566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.996808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.997001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:00.997362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.997785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.997984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.998121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.998413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:00.998741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.998934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.999068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.999378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:00.999786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:00.999978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 
00:20:58.212 [2024-04-18 13:50:01.000157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:01.000342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:01.000385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.212 qpair failed and we were unable to recover it. 00:20:58.212 [2024-04-18 13:50:01.000504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.212 [2024-04-18 13:50:01.000672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.213 [2024-04-18 13:50:01.000715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.213 qpair failed and we were unable to recover it. 00:20:58.213 [2024-04-18 13:50:01.000872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.213 [2024-04-18 13:50:01.001019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.213 [2024-04-18 13:50:01.001055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.213 qpair failed and we were unable to recover it. 00:20:58.213 [2024-04-18 13:50:01.001220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.213 [2024-04-18 13:50:01.001400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.213 [2024-04-18 13:50:01.001441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.213 qpair failed and we were unable to recover it. 
00:20:58.213 [2024-04-18 13:50:01.001608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.213 [2024-04-18 13:50:01.001802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.213 [2024-04-18 13:50:01.001870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.213 qpair failed and we were unable to recover it.
[log condensed: the three-line sequence above — two posix_sock_create connect() failures with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f6450000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats unchanged, timestamps 13:50:01.001608 through 13:50:01.034401 (elapsed 00:20:58.213–00:20:58.498), roughly 88 consecutive cycles]
00:20:58.498 [2024-04-18 13:50:01.034531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.034685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.034725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.034865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.035192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.035543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.035730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.035885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.036255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.036604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.036842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.036988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.037340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.037704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.037964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.038149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.038318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.038361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.038532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.038737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.038783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.038981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.039171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.039216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.039391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.039596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.039635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.039767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.039975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.040017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.040211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.040339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.040382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.040529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.040698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.040738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.040921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.041300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.041702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.041875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.042032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.042189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.042229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.042397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.042610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.042663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.042859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.042992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.043028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.043195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.043375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.043417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.498 [2024-04-18 13:50:01.043554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.043723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.043764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.043912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.044252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 00:20:58.498 [2024-04-18 13:50:01.044593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.498 [2024-04-18 13:50:01.044796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.498 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.044927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.045301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.045623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.045853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.045990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.046312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.046714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.046855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.047005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.047338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.047731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.047965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.048112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.048483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.048831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.048990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.049200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.049347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.049389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.049570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.049703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.049745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.049898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.050250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.050588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.050845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.051003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.051368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.051709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.051960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.052089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.052248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.052273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.052432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.052620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.052643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.052828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.052984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.053007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.053163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.053372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.053414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.053592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.053763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.053805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.053931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.054321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 00:20:58.499 [2024-04-18 13:50:01.054649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.499 [2024-04-18 13:50:01.054866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.499 qpair failed and we were unable to recover it. 
00:20:58.499 [2024-04-18 13:50:01.055045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.500 [2024-04-18 13:50:01.055201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.500 [2024-04-18 13:50:01.055224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:58.500 qpair failed and we were unable to recover it.
00:20:58.500 [... the same four-line pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 13:50:01.055375 through 13:50:01.084661 ...]
00:20:58.502 [2024-04-18 13:50:01.084778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.084912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.084935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.085107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.085405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.085796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.085962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 
00:20:58.502 [2024-04-18 13:50:01.086100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.086366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.086408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.086622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.086832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.086882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.087020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.087350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 
00:20:58.502 [2024-04-18 13:50:01.087706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.087915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.088105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.088238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.088262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.088435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.088583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.088635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.088795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 
00:20:58.502 [2024-04-18 13:50:01.089187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.089536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.089738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.089903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.090300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 
00:20:58.502 [2024-04-18 13:50:01.090601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.090777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.090937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.091043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.091066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.502 qpair failed and we were unable to recover it. 00:20:58.502 [2024-04-18 13:50:01.091252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.091394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.502 [2024-04-18 13:50:01.091431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.091593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.091754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.091790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.092015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.092298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.092602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.092809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.092923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.093269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.093623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.093830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.094008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.094345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.094788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.094983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.095121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.095287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.095330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.095470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.095621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.095672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.095802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.095977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.096013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.096188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.096430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.096459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.096655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.096813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.096840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.097029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.097208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.097232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.097421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.097615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.097659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.097811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.097994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.098036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.098170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.098347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.098392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.098556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.098740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.098790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.098959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.099383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.099704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.099857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.100011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.100249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.100286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.100468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.100680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.100768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.100904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.101257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.101626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.101840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.101994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 
00:20:58.503 [2024-04-18 13:50:01.102347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.102761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.102911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.103064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.103238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.503 [2024-04-18 13:50:01.103262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.503 qpair failed and we were unable to recover it. 00:20:58.503 [2024-04-18 13:50:01.103424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.103572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.103614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 
00:20:58.504 [2024-04-18 13:50:01.103750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.103903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.103926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 00:20:58.504 [2024-04-18 13:50:01.104091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 00:20:58.504 [2024-04-18 13:50:01.104361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 00:20:58.504 [2024-04-18 13:50:01.104699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.104886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 
00:20:58.504 [2024-04-18 13:50:01.105054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.105158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.504 [2024-04-18 13:50:01.105189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.504 qpair failed and we were unable to recover it. 
00:20:58.506 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x7f6450000b90 (addr=10.0.0.2, port=4420) repeats continuously from 13:50:01.105 through 13:50:01.135 ...]
00:20:58.506 [2024-04-18 13:50:01.136130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.136334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.136377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.136532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.136753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.136795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.136938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.137334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 
00:20:58.506 [2024-04-18 13:50:01.137760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.137960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.138111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.138412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.138804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.138995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 
00:20:58.506 [2024-04-18 13:50:01.139219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.139418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.139458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.139618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.139804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.139855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.140040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.140254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.140296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.140492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.140636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.140678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 
00:20:58.506 [2024-04-18 13:50:01.140823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.140993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.141015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.141206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.141379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.141422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.141588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.141800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.141841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.141978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.142105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.142128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 
00:20:58.506 [2024-04-18 13:50:01.142328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.142526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.142567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.142739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.142975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.143022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.143161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.143380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.143422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 00:20:58.506 [2024-04-18 13:50:01.143555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.143753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.506 [2024-04-18 13:50:01.143795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:58.506 qpair failed and we were unable to recover it. 
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Write completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Write completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Write completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.506 Read completed with error (sct=0, sc=8)
00:20:58.506 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Read completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 Write completed with error (sct=0, sc=8)
00:20:58.507 starting I/O failed
00:20:58.507 [2024-04-18 13:50:01.144073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:58.507 [2024-04-18 13:50:01.144219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185c910 is same with the state(5) to be set
00:20:58.507 [2024-04-18 13:50:01.144476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.144643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.144675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:20:58.507 qpair failed and we were unable to recover it.
00:20:58.507 [2024-04-18 13:50:01.144803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.144967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.144996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.145183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.145353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.145378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.145494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.145683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.145712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.145888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.146218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.146556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.146738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.146874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.147242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.147576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.147778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.147930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.148269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.148605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.148777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.148956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.149329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.149646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.149854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.150029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.150337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.150699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.150846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.150996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.152411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.152781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.152965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.153142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.153482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.153782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.153983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.154100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.154237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.154263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.154401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.157381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 00:20:58.507 [2024-04-18 13:50:01.157789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.507 [2024-04-18 13:50:01.157969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.507 qpair failed and we were unable to recover it. 
00:20:58.507 [2024-04-18 13:50:01.158155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.158338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.158367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.507 qpair failed and we were unable to recover it.
00:20:58.507 [2024-04-18 13:50:01.158524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.158702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.158730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.507 qpair failed and we were unable to recover it.
00:20:58.507 [2024-04-18 13:50:01.158855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.159070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.159101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.507 qpair failed and we were unable to recover it.
00:20:58.507 [2024-04-18 13:50:01.159301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.507 [2024-04-18 13:50:01.159460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.159489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.159616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.159798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.159826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.160000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.160364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.160675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.160826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.160990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.161313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.161676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.161872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.162021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.162349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.162644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.162781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.162938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.163295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.163586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.163772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.163924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.164258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.164677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.164834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.164949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.165263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.165626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.165792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.165969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.166261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.166568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.166733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.166913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.167265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.167603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.167819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.167984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.168245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.168575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.168778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.168942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.169341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.169666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.169807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.169972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.170256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.170525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.170837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.170994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.171023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.171184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.171343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.171368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.171544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.171682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.171710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.171865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.172006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.172034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.172198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.172320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.508 [2024-04-18 13:50:01.172347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.508 qpair failed and we were unable to recover it.
00:20:58.508 [2024-04-18 13:50:01.172475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.172600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.172624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.172779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.172922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.172950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.173112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.173439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.173748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.173926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.174087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.174397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.174705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.174853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.175017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.175355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.175676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.175824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.175966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.176250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.176548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.176842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.176998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.177161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.177327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.177356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.177510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.177626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.177654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.177819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.177987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.178173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.178523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.178810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.178960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.179117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.179247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.179273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.179437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.179625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.179681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.179825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.179991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.180145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.180486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.180800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.180955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.181081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.181410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.181741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.181878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.182062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.182352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.182724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.182907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.183035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.183188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.183214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.509 qpair failed and we were unable to recover it.
00:20:58.509 [2024-04-18 13:50:01.183385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.509 [2024-04-18 13:50:01.183500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.183525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.183661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.183840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.183866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.184024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.184317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.184676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.184814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.184944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.185239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.185574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.185838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.185992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.186155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.186281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.186304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.186422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.186573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.510 [2024-04-18 13:50:01.186612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.510 qpair failed and we were unable to recover it.
00:20:58.510 [2024-04-18 13:50:01.186764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.186911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.186938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.187065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.187406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.187741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.187873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.188100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.188479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.188826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.188968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.189145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.189475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.189795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.189982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.190169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.190337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.190366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.190507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.190658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.190681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.190917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.191258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.191606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.191798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.191927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.192310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.192613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.192743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.192918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.193293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.193582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.193745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.193892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.194237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.194566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.194749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.194965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.195337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.195659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.195830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.195968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.196139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.196163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 
00:20:58.510 [2024-04-18 13:50:01.196327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.196483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.510 [2024-04-18 13:50:01.196507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.510 qpair failed and we were unable to recover it. 00:20:58.510 [2024-04-18 13:50:01.196637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.196791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.196814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.196938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.197329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.197634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.197812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.197969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.198253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.198648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.198849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.199024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.199384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.199769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.199929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.200089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.200417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.200799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.200968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.201084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.201424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.201770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.201986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.202144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.202297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.202326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.202477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.202629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.202657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.202813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.202977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.203152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.203491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.203801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.203976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.204150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.204452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.204824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.204986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.205136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.205478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.511 [2024-04-18 13:50:01.205806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.205995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.206121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.206448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 00:20:58.511 [2024-04-18 13:50:01.206762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.511 [2024-04-18 13:50:01.206926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.511 qpair failed and we were unable to recover it. 
00:20:58.513 [2024-04-18 13:50:01.241329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.241562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.241615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.241806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.241989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.242018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.242230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.242412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.242437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.242682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.242829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.242869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.243065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.243428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.243761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.243996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.244191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.244328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.244356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.244587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.244826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.244875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.245090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.245232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.245262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.245441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.245662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.245712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.245906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.246279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.246651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.246885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.247038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.247319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.247345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.247507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.247682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.247710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.247853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.248219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.248619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.248867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.248992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.249477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.249805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.249982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.250165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.250344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.250372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.250575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.250776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.250800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.251021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.251240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.251287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.251468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.251665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.251714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.251901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.252297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.252718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.252898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.253125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.253269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.253295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.253511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.253744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.253792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.253943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.254131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.254160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.254408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.254595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.254648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.254836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.254984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.255011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.255236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.255391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.255420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.255603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.255871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.255924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 
00:20:58.514 [2024-04-18 13:50:01.256109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.256283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.256311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.256537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.256751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.256802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.256975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.257093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.257115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.514 qpair failed and we were unable to recover it. 00:20:58.514 [2024-04-18 13:50:01.257292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.257517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.514 [2024-04-18 13:50:01.257567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.257772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.257968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.258020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.258224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.258389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.258412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.258614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.258847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.258897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.259095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.259280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.259310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.259479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.259693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.259730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.259941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.260142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.260170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.260363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.260566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.260616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.260801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.261235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.261658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.261891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.262123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.262327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.262357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.262545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.262735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.262799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.262994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.263211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.263241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.263457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.263672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.263742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.263971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.264140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.264169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.264374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.264565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.264616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.264838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.265247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.265701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.265946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.266143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.266375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.266403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.266564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.266793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.266845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.267020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.267197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.267228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.267429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.267580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.267604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.267787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.268247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.268658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.268954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.269142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.269340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.269369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.269553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.269739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.269793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.269984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.270206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.270236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.270462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.270661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.270719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.270904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.271278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.271661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.271908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.272113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.272297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.272327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.272527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.272718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.272741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.272977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.273190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.273218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 
00:20:58.515 [2024-04-18 13:50:01.273434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.273642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.273694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.273892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.274057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.274085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.274276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.274466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.274494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.515 qpair failed and we were unable to recover it. 00:20:58.515 [2024-04-18 13:50:01.274651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.515 [2024-04-18 13:50:01.274815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.274844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.275038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.275270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.275299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.275457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.275693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.275742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.275930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.276161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.276197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.276395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.276592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.276642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.276832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.277228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.277638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.277859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.278106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.278303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.278332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.278530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.278764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.278815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.278990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.279207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.279237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.279433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.279623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.279671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.279860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.280058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.280087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.280292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.280537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.280593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.280818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.280993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.281043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.281243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.281482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.281546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.281742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.281959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.281996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.282202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.282407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.282435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.282634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.282851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.282900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.283142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.283367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.283396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.283588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.283790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.283843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 
00:20:58.516 [2024-04-18 13:50:01.284038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.284226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.284255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.284460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.284663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.284686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.284919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.285156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.516 [2024-04-18 13:50:01.285192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.516 qpair failed and we were unable to recover it. 00:20:58.516 [2024-04-18 13:50:01.285419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.285598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.285654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.285864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.286288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.286724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.286931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.287098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.287270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.287297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.287522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.287768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.287820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.288011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.288204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.288233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.288451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.288646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.288671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.288875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.289345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.289747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.289991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.290163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.290361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.290391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.290618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.290821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.290874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.291099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.291271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.291301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.291457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.291659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.291718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.291919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.292126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.292155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.292394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.292621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.292647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.292803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.293265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.293742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.293928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.294123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.294341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.294368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 
00:20:58.785 [2024-04-18 13:50:01.294530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.294699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.294725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.294923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.295162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.295198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.785 qpair failed and we were unable to recover it. 00:20:58.785 [2024-04-18 13:50:01.295412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.785 [2024-04-18 13:50:01.295603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.295629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.295856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.295997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.296023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 
00:20:58.786 [2024-04-18 13:50:01.296188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.296385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.296413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.296643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.296817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.296869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.297043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.297240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.297270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.297496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.297652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.297678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 
00:20:58.786 [2024-04-18 13:50:01.297872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.298331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.298754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.298977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 00:20:58.786 [2024-04-18 13:50:01.299218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.299379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.786 [2024-04-18 13:50:01.299405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.786 qpair failed and we were unable to recover it. 
00:20:58.786 [2024-04-18 13:50:01.299597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.299844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.299883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.300114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.300325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.300352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.300544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.300750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.300805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.301025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.301233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.301262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.301475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.301712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.301738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.301904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.302096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.302122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.302363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.302537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.302577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.302784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.303255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.303715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.303972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.304213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.304426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.304452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.304619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.304835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.304861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.305108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.305298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.305329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.305522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.305745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.305770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.305977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.306198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.306224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.306470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.306702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.306748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.306957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.307150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.307183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.786 qpair failed and we were unable to recover it.
00:20:58.786 [2024-04-18 13:50:01.307407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.786 [2024-04-18 13:50:01.307611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.307658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.307896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.308099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.308125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.308375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.308593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.308641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.308845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.309012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.309060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.309317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.309512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.309539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.309792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.310237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.310699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.310942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.311241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.311491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.311531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.311755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.312009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.312062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.312318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.312595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.312649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.312870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.313070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.313099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.313282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.313552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.313599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.313824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.313985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.314014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.314170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.314465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.314494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.314713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.314970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.315025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.315297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.315560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.315610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.315827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.316011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.316039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.316258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.316539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.316587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.316843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.317054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.317083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.317343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.317563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.317611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.317763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.318010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.318057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.318312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.318569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.318616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.318823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.319059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.319084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.319372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.319668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.319713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.319931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.320171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.320213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.320515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.320742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.320797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.787 [2024-04-18 13:50:01.321004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.321165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.787 [2024-04-18 13:50:01.321205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.787 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.321490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.321722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.321768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.321973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.322162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.322222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.322458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.322672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.322717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.322922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.323126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.323154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.323415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.323583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.323616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.323797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.324257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.324712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.324974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.325199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.325381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.325407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.325625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.325802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.325827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.326025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.326228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.326254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.326452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.326655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.326679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.326896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.327112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.327141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.327359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.327617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.327664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.327905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.328105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.328135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.328310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.328520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.328561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.328759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.328994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.329039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.329260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.329397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.329423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.329650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.329891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.329937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.330142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.330352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.330381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.330612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.330855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.330901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.331143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.331337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.331366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.331574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.331772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.331817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.332062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.332283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.332312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.332504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.332702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.332747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.788 [2024-04-18 13:50:01.332997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.333237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.788 [2024-04-18 13:50:01.333266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.788 qpair failed and we were unable to recover it.
00:20:58.789 [2024-04-18 13:50:01.333469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.789 [2024-04-18 13:50:01.333693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.789 [2024-04-18 13:50:01.333738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.789 qpair failed and we were unable to recover it.
00:20:58.789 [2024-04-18 13:50:01.333987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.334241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.334270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.334455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.334662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.334705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.334980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.335219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.335259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.335461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.335722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.335772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 
00:20:58.789 [2024-04-18 13:50:01.336007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.336242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.336271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.336422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.336666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.336715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.336962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.337136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.337165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.337379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.337646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.337696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 
00:20:58.789 [2024-04-18 13:50:01.337952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.338190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.338223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.338401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.338662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.338713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.338932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.339116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.339144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.339376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.339603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.339653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 
00:20:58.789 [2024-04-18 13:50:01.339907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.340109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.340137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.340404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.340587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.340638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.340886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.341101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.341131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.341397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.341632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.341682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 
00:20:58.789 [2024-04-18 13:50:01.341885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.342122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.342151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.342417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.342644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.342695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.342944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.343201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.343242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.343441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.343647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.343697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 
00:20:58.789 [2024-04-18 13:50:01.343925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.344170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.344206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.344395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.344651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.344702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.789 qpair failed and we were unable to recover it. 00:20:58.789 [2024-04-18 13:50:01.344947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.345126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.789 [2024-04-18 13:50:01.345155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.345376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.345638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.345689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 
00:20:58.790 [2024-04-18 13:50:01.345906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.346155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.346203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.346433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.346659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.346709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.346923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.347086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.347123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.347316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.347523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.347609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 
00:20:58.790 [2024-04-18 13:50:01.347829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.348100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.348151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.348406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.348643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.348693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.348908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.349086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.349114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.349391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.349569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.349620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 
00:20:58.790 [2024-04-18 13:50:01.349880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.350109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.350137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.350421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.350666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.350716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.350982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.351166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.351205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.351436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.351605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.351652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 
00:20:58.790 [2024-04-18 13:50:01.351870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.352137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.352197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.352470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.352679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.352730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.352982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.353185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.353215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.353444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.353602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.353650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 
00:20:58.790 [2024-04-18 13:50:01.353904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.354074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.354102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.354367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.354641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.790 [2024-04-18 13:50:01.354692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.790 qpair failed and we were unable to recover it. 00:20:58.790 [2024-04-18 13:50:01.354923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.355106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.355135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.355407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.355663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.355713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.355971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.356232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.356262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.356524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.356715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.356765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.357015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.357161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.357207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.357424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.357700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.357749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.357984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.358195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.358225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.358502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.358730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.358780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.359002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.359263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.359293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.359534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.359798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.359849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.360112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.360361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.360390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.360620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.360858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.360909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.361192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.361425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.361454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.361681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.361904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.361955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.362164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.362413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.362442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.362694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.362986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.363036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.363332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.363626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.363676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.363883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.364080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.364109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.364385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.364560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.364611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.364871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.365122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.365152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.365435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.365699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.365748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 00:20:58.791 [2024-04-18 13:50:01.366038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.366269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.791 [2024-04-18 13:50:01.366299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.791 qpair failed and we were unable to recover it. 
00:20:58.791 [2024-04-18 13:50:01.366574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.791 [2024-04-18 13:50:01.366807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.791 [2024-04-18 13:50:01.366858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.791 qpair failed and we were unable to recover it.
[the four-line failure cycle above repeats verbatim from 13:50:01.367095 through 13:50:01.416102, with only the timestamps advancing; same tqpair=0x7f6458000b90, addr=10.0.0.2, port=4420, errno = 111 throughout]
00:20:58.795 [2024-04-18 13:50:01.416405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.416629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.416682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.416963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.417191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.417221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.417495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.417786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.417811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.418064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.418285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.418315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 
00:20:58.795 [2024-04-18 13:50:01.418597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.418832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.418884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.419152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.419427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.419454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.419732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.420006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.420031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.420296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.420561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.420616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 
00:20:58.795 [2024-04-18 13:50:01.420916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.421214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.421241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.421437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.421683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.421708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.421945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.422205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.422231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.422466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.422721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.422746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 
00:20:58.795 [2024-04-18 13:50:01.422936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.423241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.423272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.423539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.423824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.423874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.424164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.424409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.424434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.424706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.424902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.424927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 
00:20:58.795 [2024-04-18 13:50:01.425189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.425428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.425457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.795 qpair failed and we were unable to recover it. 00:20:58.795 [2024-04-18 13:50:01.425730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.425951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.795 [2024-04-18 13:50:01.426010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.426282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.426533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.426562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.426805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.427012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.427061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.427325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.427540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.427594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.427768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.427992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.428045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.428308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.428577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.428628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.428913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.429132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.429172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.429432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.429750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.429801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.430088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.430347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.430377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.430678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.430969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.430994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.431262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.431516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.431545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.431791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.432034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.432085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.432349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.432599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.432624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.432865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.433107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.433136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.433406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.433637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.433689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.433938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.434203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.434233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.434487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.434746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.434796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.435092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.435310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.435340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.435586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.435840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.435891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.436134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.436410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.436439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.436654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.436910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.436958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.437258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.437559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.437588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.437885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.438150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.438209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 
00:20:58.796 [2024-04-18 13:50:01.438511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.438776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.438828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.439069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.439305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.439336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.439633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.439845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.439897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.796 qpair failed and we were unable to recover it. 00:20:58.796 [2024-04-18 13:50:01.440186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.440434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.796 [2024-04-18 13:50:01.440463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 
00:20:58.797 [2024-04-18 13:50:01.440700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.440994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.441045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.441300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.441507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.441536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.441801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.442074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.442129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.442462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.442717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.442768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 
00:20:58.797 [2024-04-18 13:50:01.443018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.443205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.443235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.443505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.443770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.443824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.444109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.444410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.444440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.444729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.444991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.445040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 
00:20:58.797 [2024-04-18 13:50:01.445340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.445651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.445700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.445986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.446237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.446267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.446515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.446790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.446841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.447010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.447224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.447265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 
00:20:58.797 [2024-04-18 13:50:01.447549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.447852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.447904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.448131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.448325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.448355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.448643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.448947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.448997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 00:20:58.797 [2024-04-18 13:50:01.449282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.449542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.797 [2024-04-18 13:50:01.449571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.797 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.499994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.500231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.500261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.500536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.500833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.500885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.501154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.501450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.501480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.501781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.502086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.502136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.502439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.502755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.502804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.503000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.503277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.503307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.503564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.503813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.503861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.504154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.504454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.504483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.504782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.504971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.505022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.505264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.505516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.505572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.505847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.506151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.506209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.506471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.506777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.506825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.507112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.507368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.507398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.507678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.507975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.508025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.508271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.508529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.508582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.508880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.509165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.509202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.509496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.509731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.509781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.510029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.510316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.510346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.510611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.510863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.510916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.511156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.511335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.511364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.511662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.511972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.512020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.512268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.512564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.512616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.512872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.513188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.513233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.513526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.513752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.513802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 
00:20:58.801 [2024-04-18 13:50:01.514096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.514347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.514377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.801 qpair failed and we were unable to recover it. 00:20:58.801 [2024-04-18 13:50:01.514624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.801 [2024-04-18 13:50:01.514877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.514927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.515170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.515472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.515501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.515817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.516127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.516187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.516449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.516696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.516745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.517024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.517297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.517328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.517524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.517813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.517865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.518160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.518414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.518443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.518739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.519005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.519055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.519270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.519569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.519619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.519906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.520402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.520431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.520702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.520990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.521038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.521338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.521565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.521614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.521914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.522207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.522237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.522640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.522973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.523023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.523318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.523560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.523613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.523893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.524143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.524172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.524446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.524709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.524757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.525004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.525227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.525257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.525434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.525596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.525657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.525836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.526309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.526651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.526927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.527099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.527518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.527824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.527997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.528173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.528399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.528427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.528635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.528790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.528853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 
00:20:58.802 [2024-04-18 13:50:01.529034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.529249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.802 [2024-04-18 13:50:01.529278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.802 qpair failed and we were unable to recover it. 00:20:58.802 [2024-04-18 13:50:01.529490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.529630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.529652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.529849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.529997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.530025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.530232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.530397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.530424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.530598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.530742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.530782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.530956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.531386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.531788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.531989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.532131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.532318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.532347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.532486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.532649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.532678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.532857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.533232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.533633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.533797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.533959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.534300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.534663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.534819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.535041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.535342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.535390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.535630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.535881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.535932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.536220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.536488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.536537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.536833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.537145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.537208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.537480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.537771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.537821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.538076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.538347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.538377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.538640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.538863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.538911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.539197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.539448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.539478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.539773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.539994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.540052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.540259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.540521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.540579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.540872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.541068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.541096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 00:20:58.803 [2024-04-18 13:50:01.541259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.541494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.541549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.803 qpair failed and we were unable to recover it. 
00:20:58.803 [2024-04-18 13:50:01.541842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.542142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.803 [2024-04-18 13:50:01.542171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.542469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.542747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.542795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.543036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.543332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.543362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.543627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.543826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.543875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.544134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.544381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.544417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.544711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.545009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.545061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.545320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.545552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.545601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.545859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.546117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.546145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.546321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.546580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.546629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.546863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.547119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.547168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.547434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.547737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.547787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.548076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.548357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.548387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.548650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.548907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.548958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.549248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.549535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.549565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.549854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.550122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.550172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.550432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.550653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.550703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.550868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.551119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.551169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.551477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.551792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.551841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.552033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.552286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.552339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.552572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.552829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.552879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.553166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.553437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.553466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.553737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.553937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.553989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.554275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.554561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.554591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.554885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.555195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.555243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 
00:20:58.804 [2024-04-18 13:50:01.555488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.555791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.555842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.556129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.556365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.804 [2024-04-18 13:50:01.556406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.804 qpair failed and we were unable to recover it. 00:20:58.804 [2024-04-18 13:50:01.556671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.556856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.556907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.557163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.557409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.557443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 
00:20:58.805 [2024-04-18 13:50:01.557627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.557919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.557970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.558197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.558438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.558467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.558725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.559015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.559065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.559355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.559593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.559643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 
00:20:58.805 [2024-04-18 13:50:01.559918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.560205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.560235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.560546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.560856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.560904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.561203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.561426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.561455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.561741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.561991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.562041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 
00:20:58.805 [2024-04-18 13:50:01.562327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.562627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.562677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.562925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.563167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.563209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.563467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.563764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.563815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.564024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.564297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.564327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 
00:20:58.805 [2024-04-18 13:50:01.564619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.564817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.564867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.565112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.565372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.565402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.565657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.565873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.565923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.566156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.566418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.566447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 
00:20:58.805 [2024-04-18 13:50:01.566691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.566998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.567050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.567338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.567656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.567707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.567960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.568192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.568222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.805 qpair failed and we were unable to recover it. 00:20:58.805 [2024-04-18 13:50:01.568465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.568741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.805 [2024-04-18 13:50:01.568796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:58.806 qpair failed and we were unable to recover it. 
00:20:58.806 [2024-04-18 13:50:01.569087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.806 [2024-04-18 13:50:01.569354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.806 [2024-04-18 13:50:01.569384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:58.806 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() failures with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error on tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 13:50:01.569640 through 13:50:01.619842 ...]
00:20:59.079 [2024-04-18 13:50:01.620151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.620420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.620450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.620686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.620990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.621039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.621337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.621592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.621641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.621890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.622132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.622162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.622444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.622749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.622798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.623058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.623300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.623329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.623615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.623924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.623974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.624271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.624522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.624551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.624842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.625114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.625164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.625431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.625680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.625735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.625922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.626219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.626249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.626541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.626798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.626846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.627130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.627427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.627458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.627728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.627986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.628034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.628323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.628592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.628643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.628890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.629130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.629158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.629464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.629702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.629756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.630044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.630334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.630364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.630650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.630907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.630957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.631220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.631485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.631515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.631813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.632084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.632136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.632440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.632712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.632761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.633048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.633281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.633311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.633602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.633869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.633917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 
00:20:59.079 [2024-04-18 13:50:01.634209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.634503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.634533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.634821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.635087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.635135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.635440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.635680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.635728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.079 qpair failed and we were unable to recover it. 00:20:59.079 [2024-04-18 13:50:01.635976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.079 [2024-04-18 13:50:01.636222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.636252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.636503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.636765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.636814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.637109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.637407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.637437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.637744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.638001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.638052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.638345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.638609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.638659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.638915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.639211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.639241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.639539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.639806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.639853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.640141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.640446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.640476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.640737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.641033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.641083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.641341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.641648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.641697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.641989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.642268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.642299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.642591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.642844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.642893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.643146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.643437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.643466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.643729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.643984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.644035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.644303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.644581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.644632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.644890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.645131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.645165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.645471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.645724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.645773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.646060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.646323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.646353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.646649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.646861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.646909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.647208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.647481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.647510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.647801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.648099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.648149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.648450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.648716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.648763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.649047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.649250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.649280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.649536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.649836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.649889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.650194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.650449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.650479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 
00:20:59.080 [2024-04-18 13:50:01.650726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.650931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.650986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.651279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.651540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.651570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.651857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.652094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.652142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.080 qpair failed and we were unable to recover it. 00:20:59.080 [2024-04-18 13:50:01.652451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.080 [2024-04-18 13:50:01.652727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.652778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 
00:20:59.081 [2024-04-18 13:50:01.653075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.653370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.653400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.653672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.653964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.654013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.654268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.654562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.654592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.654839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.655137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.655195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 
00:20:59.081 [2024-04-18 13:50:01.655509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.655774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.655823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.656112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.656362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.656392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.656688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.656951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.657003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 00:20:59.081 [2024-04-18 13:50:01.657253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.657550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.081 [2024-04-18 13:50:01.657579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.081 qpair failed and we were unable to recover it. 
00:20:59.081 [2024-04-18 13:50:01.657827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.658121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.658169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.658470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.658780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.658831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.659117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.659360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.659390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.659652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.659908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.659955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.660175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.660455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.660484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.660726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.660941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.660992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.661236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.661496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.661525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.661799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.662081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.662130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.662385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.662634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.662690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.662943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.663238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.663268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.663512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.663812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.663861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.664151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.664411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.664440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.664731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.664937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.664986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.665222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.665464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.665504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.081 qpair failed and we were unable to recover it.
00:20:59.081 [2024-04-18 13:50:01.665805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.081 [2024-04-18 13:50:01.666102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.666153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.666475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.666754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.666805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.667041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.667293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.667323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.667613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.667831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.667882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.668173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.668396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.668426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.668724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.668998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.669049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.669264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.669507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.669536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.669745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.669956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.670008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.670278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.670532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.670557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.670813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.671075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.671100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.671353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.671603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.671654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.671948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.672202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.672232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.672523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.672828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.672876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.673172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.673431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.673460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.673701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.673989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.674041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.674298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.674608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.674666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.674960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.675214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.675244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.675543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.675862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.675913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.676164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.676405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.676435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.676695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.676961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.677011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.677283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.677558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.677622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.677909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.678196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.678227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.678474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.678697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.678748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.678959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.679245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.679274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.679567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.679835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.679886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.680150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.680456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.680487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.680797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.680997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.681022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.681323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.681590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.681616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.082 qpair failed and we were unable to recover it.
00:20:59.082 [2024-04-18 13:50:01.681815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.082 [2024-04-18 13:50:01.682065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.682089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.682320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.682577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.682602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.682903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.683208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.683238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.683480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.683791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.683843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.684167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.684422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.684452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.684708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.684944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.684993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.685249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.685543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.685572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.685874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.686127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.686151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.686460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.686678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.686742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.687028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.687271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.687301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.687593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.687868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.687893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.688133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.688439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.688469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.688709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.689003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.689055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.689368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.689664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.689689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.689930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.690132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.690159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.690389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.690646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.690695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.690969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.691219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.691246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.691465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.691735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.691760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.692031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.692231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.692261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.692431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.692692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.692730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.692979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.693152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.693181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.693324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.693481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.693521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.693760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.693999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.694025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.694266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.694427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.694469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.694633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.694775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.694804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.694993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.695195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.695225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.695384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.695557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.695586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.695758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.695959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.696015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.083 qpair failed and we were unable to recover it.
00:20:59.083 [2024-04-18 13:50:01.696213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.083 [2024-04-18 13:50:01.696398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.084 [2024-04-18 13:50:01.696425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.084 qpair failed and we were unable to recover it.
00:20:59.084 [2024-04-18 13:50:01.696637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.084 [2024-04-18 13:50:01.696819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.084 [2024-04-18 13:50:01.696870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.084 qpair failed and we were unable to recover it.
00:20:59.084 [2024-04-18 13:50:01.697014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.697194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.697242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.697404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.697654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.697712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.697876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.698229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.698636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.698879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.699010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.699389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.699801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.699992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.700173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.700344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.700373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.700530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.700710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.700778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.700959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.701128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.701152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.701313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.701477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.701517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.701786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.701981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.702006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.702226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.702428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.702454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.702690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.702880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.702919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.703098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.703300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.703326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.703462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.703665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.703690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.703850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.704228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.704643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.704878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.705053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.705255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.705282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.705474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.705671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.705696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.705874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.706290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 
00:20:59.084 [2024-04-18 13:50:01.706652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.706897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.707100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.707287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.084 [2024-04-18 13:50:01.707314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.084 qpair failed and we were unable to recover it. 00:20:59.084 [2024-04-18 13:50:01.707497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.707683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.707708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.707873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.708275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.708618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.708859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.709114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.709437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.709786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.709955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.710140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.710327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.710353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.710586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.710778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.710803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.711025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.711189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.711239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.711434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.711617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.711642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.711824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.712239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.712695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.712963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.713256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.713426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.713452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.713603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.713777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.713802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.714109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.714269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.714296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.714511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.714720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.714745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.714931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.715122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.715151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.715343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.715543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.715569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.715800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.716291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 
00:20:59.085 [2024-04-18 13:50:01.716680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.716945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.717202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.717353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.085 [2024-04-18 13:50:01.717381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.085 qpair failed and we were unable to recover it. 00:20:59.085 [2024-04-18 13:50:01.717605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.717822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.717866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.718107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.718294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.718323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 
00:20:59.086 [2024-04-18 13:50:01.718493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.718695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.718743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.719026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.719262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.719292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.719470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.719627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.719651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.719917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.720194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.720230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 
00:20:59.086 [2024-04-18 13:50:01.720422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.720573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.720605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.720908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.721140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.721169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.721357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.721540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.721576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.721823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.722121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.722170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 
00:20:59.086 [2024-04-18 13:50:01.722336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.722543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.722582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.722814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.722984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.723033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.723204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.723356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.723384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.723644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.723885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.723909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 
00:20:59.086 [2024-04-18 13:50:01.724097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.724296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.724340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.724572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.724804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.724849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.725133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.725328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.725355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.725499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.725627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.725657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 
00:20:59.086 [2024-04-18 13:50:01.725888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.726115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.726144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.726307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.726500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.726529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.726778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.727039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.727075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.086 qpair failed and we were unable to recover it. 00:20:59.086 [2024-04-18 13:50:01.727247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.086 [2024-04-18 13:50:01.727390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.727417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.727565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.727713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.727739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.727923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.728378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.728754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.728918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.729095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.729310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.729339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.729567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.729824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.729857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.730077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.730260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.730286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.730454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.730607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.730632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.730864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.731288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.731646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.731836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.732040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.732295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.732324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.732489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.732721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.732767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.733029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.733269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.733296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.733492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.733712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.733741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.733967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.734121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.734163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.734390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.734609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.734635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.734880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.735153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.735191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.735390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.735559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.735603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.735855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.736233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.736602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.736838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.737027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.737224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.737265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.737432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.737574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.737602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 
00:20:59.087 [2024-04-18 13:50:01.737742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.738205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.087 [2024-04-18 13:50:01.738565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.087 [2024-04-18 13:50:01.738780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.087 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.739012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.739283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.739310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.739453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.739623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.739652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.739842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.740228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.740636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.740931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.741064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.741260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.741287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.741437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.741624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.741681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.741851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.742270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.742628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.742871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.743011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.743372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.743749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.743950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.744138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.744415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.744442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.744652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.744829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.744879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.745071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.745202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.745229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.745394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.745587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.745647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.745823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.746220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.746587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.746830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.747004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.747339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.747771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.747977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.748140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.748300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.748326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.748498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.748624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.748666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 
00:20:59.088 [2024-04-18 13:50:01.748848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.749227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.749574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.749819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.088 qpair failed and we were unable to recover it. 00:20:59.088 [2024-04-18 13:50:01.749989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.088 [2024-04-18 13:50:01.750144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.750172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 
00:20:59.089 [2024-04-18 13:50:01.750362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.750495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.750519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.750694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.750822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.750844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.751019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.751205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.751248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.751416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.751619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.751680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 
00:20:59.089 [2024-04-18 13:50:01.751880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.752235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.752621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.752820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 00:20:59.089 [2024-04-18 13:50:01.753009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.753145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.089 [2024-04-18 13:50:01.753196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.089 qpair failed and we were unable to recover it. 
00:20:59.089 [2024-04-18 13:50:01.753372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.753551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.753589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.753792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.753967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.754016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.754206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.754373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.754399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.754570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.754736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.754765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.754896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.755247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.755628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.755821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.756020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.756373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.756689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.756939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.757069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.757244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.757271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.757411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.757557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.757580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.757800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.757992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.758020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.758192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.758348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.758373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.758517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.758663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.758704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.758943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.759356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.759757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.759953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.760149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.760328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.760357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.089 [2024-04-18 13:50:01.760498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.760684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.089 [2024-04-18 13:50:01.760749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.089 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.760918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.761256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.761569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.761734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.761889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.762230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.762566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.762791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.762994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.763337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.763730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.763948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.764116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.764269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.764312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.764461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.764596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.764625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.764822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.764985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.765014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.765189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.765349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.765391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.765522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.765691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.765720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.765891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.766301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.766650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.766912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.767110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.767291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.767320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.767480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.767677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.767699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.767885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.768253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.768544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.768714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.768895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.769090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.769118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.090 qpair failed and we were unable to recover it.
00:20:59.090 [2024-04-18 13:50:01.769273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.769413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.090 [2024-04-18 13:50:01.769438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.769622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.769911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.769961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.770163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.770344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.770372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.770533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.770675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.770703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.770872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.771222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.771561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.771823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.771994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.772126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.772149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.772339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.772529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.772594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.772819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.773298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.773692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.773955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.774089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.774249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.774278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.774426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.774772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.774800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.774971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.775342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.775746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.775915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.776111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.776454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.776759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.776938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.777080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.777277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.777307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.777501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.777756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.777796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.777949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.778341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.778675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.778925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.779070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.779466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.779820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.779983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.091 qpair failed and we were unable to recover it.
00:20:59.091 [2024-04-18 13:50:01.780206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.780345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.091 [2024-04-18 13:50:01.780374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.780547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.780741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.780799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.780967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.781316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.781719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.781967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.782137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.782287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.782316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.782481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.782686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.782742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.782903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.783233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.783587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.783751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.783948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.784270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.784587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.784827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.784991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.785352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.785713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.785907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.786111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.786253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.092 [2024-04-18 13:50:01.786282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.092 qpair failed and we were unable to recover it.
00:20:59.092 [2024-04-18 13:50:01.786434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.786619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.786680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.786888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.787263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.787605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.787804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 
00:20:59.092 [2024-04-18 13:50:01.787941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.788310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.788648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.788900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.789067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.789248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.789278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 
00:20:59.092 [2024-04-18 13:50:01.789449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.789633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.789697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.789876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.790229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 00:20:59.092 [2024-04-18 13:50:01.790570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.092 [2024-04-18 13:50:01.790765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.092 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.790956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.791299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.791649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.791903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.792156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.792331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.792360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.792567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.792738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.792791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.792960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.793332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.793720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.793938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.794102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.794277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.794317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.794480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.794661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.794716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.794869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.795248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.795612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.795813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.795966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.796321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.796707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.796962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.797131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.797304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.797333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.797471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.797686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.797755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.797918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.798230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.798594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.798751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.798959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.799348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.799794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.799961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 
00:20:59.093 [2024-04-18 13:50:01.800134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.800287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.093 [2024-04-18 13:50:01.800316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.093 qpair failed and we were unable to recover it. 00:20:59.093 [2024-04-18 13:50:01.800457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.800615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.800639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.800865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.801294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 
00:20:59.094 [2024-04-18 13:50:01.801598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.801796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.802004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.802355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.802701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.802964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 
00:20:59.094 [2024-04-18 13:50:01.803108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.803252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.803281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.803463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.803624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.803685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.803858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.804205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 
00:20:59.094 [2024-04-18 13:50:01.804533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.804785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.804956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.805111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.805152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.805332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.805521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.805578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.805756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.805973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.806020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 
00:20:59.094 [2024-04-18 13:50:01.806186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.806318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.806341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.806493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.806693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.806722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.806945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.807198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.807227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 00:20:59.094 [2024-04-18 13:50:01.807406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.807625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.094 [2024-04-18 13:50:01.807687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.094 qpair failed and we were unable to recover it. 
00:20:59.094 [2024-04-18 13:50:01.807887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.808318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.808723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.808985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.809151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.809314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.809345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.809491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.809683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.809748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.809964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.810317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.810666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.810914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.811090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.811262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.811292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.094 qpair failed and we were unable to recover it.
00:20:59.094 [2024-04-18 13:50:01.811435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.094 [2024-04-18 13:50:01.811580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.811608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.811741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.811885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.811913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.812100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.812242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.812284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.812431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.812634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.812662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.812851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.813231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.813561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.813791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.813960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.814167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.814204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.814381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.814566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.814623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.814821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.815265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.815629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.815824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.816007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.816334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.816683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.816953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.817122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.817284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.817313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.817484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.817613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.817641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.817819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.818263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.818611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.818803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.819029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.819214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.819243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.819384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.819549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.819584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.819743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.819949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.820007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.820237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.820406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.820435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.820648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.820856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.095 [2024-04-18 13:50:01.820908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.095 qpair failed and we were unable to recover it.
00:20:59.095 [2024-04-18 13:50:01.821081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.821271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.821300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.821463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.821707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.821755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.821933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.822106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.822135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.822324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.822547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.822596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.822774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.822973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.823026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.823230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.823372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.823400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.823593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.823755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.823807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.824018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.824263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.824292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.824426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.824586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.824615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.824815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.824986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.825014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.825268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.825401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.825429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.825621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.825757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.825784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.825977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.826168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.826224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.826367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.826530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.826567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.826806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.827201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.827530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.827673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.827885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.828185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.828590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.828812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.828982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.829310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.829646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.829905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.830106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.830299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.830328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.830518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.830731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.830779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.096 qpair failed and we were unable to recover it.
00:20:59.096 [2024-04-18 13:50:01.830957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.831126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.096 [2024-04-18 13:50:01.831154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.831311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.831459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.831499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.831694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.831873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.831927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.832096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.832323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.832353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.832539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.832723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.832781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.832948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.833187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.833227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.833373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.833563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.833586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.833780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.833971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.834029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.834166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.834334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.834362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.834570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.834785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.834834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.835004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.835372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.835725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.835953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.836129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.836316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.836377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.836550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.836742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.836802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.837011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.837245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.837307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.837527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.837715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.837766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.837963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.838159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.838194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.838359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.838549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.838572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.838798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.839027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.839077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.839313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.839481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.839545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.839765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.840324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.840627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.840857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.841084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.841276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.841333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.841517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.841700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.841762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.841983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.842232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.842263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.842413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.842591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.842655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.097 qpair failed and we were unable to recover it.
00:20:59.097 [2024-04-18 13:50:01.842921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.843162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.097 [2024-04-18 13:50:01.843199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.098 qpair failed and we were unable to recover it.
00:20:59.098 [2024-04-18 13:50:01.843336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.843558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.843613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.843778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.843994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.844036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.844242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.844376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.844404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.844578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.844706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.844734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.844934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.845316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.845707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.845923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.846152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.846329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.846358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.846536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.846729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.846786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.847037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.847257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.847287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.847444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.847607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.847643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.847782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.847995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.848028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.848249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.848393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.848422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.848663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.848876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.848905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.849082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.849274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.849303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.849442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.849637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.849699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.849838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.849986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.850009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.850205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.850374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.850403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.850586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.850717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.850745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.850915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.851122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.851150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.851307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.851531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.851589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.851797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.851978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.852037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.852219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.852343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.852367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.852554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.852739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.852796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.852962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.853128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.853156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.853394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.853576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.853645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.853819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.853989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.854017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 00:20:59.098 [2024-04-18 13:50:01.854161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.854324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.098 [2024-04-18 13:50:01.854353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.098 qpair failed and we were unable to recover it. 
00:20:59.098 [2024-04-18 13:50:01.854521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.854677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.854713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.854917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.855100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.855128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.855333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.855515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.855578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.855764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.855948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.856010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.856173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.856348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.856377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.856594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.856793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.856847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.856991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.857128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.857151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.857370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.857563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.857624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.857814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.857992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.858050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.858227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.858361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.858401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.858615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.858868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.858917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.859170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.859318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.859353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.859528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.859763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.859811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.860021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.860256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.860289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.860529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.860792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.860839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.861063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.861250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.861274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.861510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.861716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.861770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.861938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.862154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.862201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.862350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.862584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.862645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.862906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.863285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.863710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.863934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.864167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.864361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.864389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.864598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.864783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.864844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.865077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.865294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.865324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.865500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.865713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.865763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.865923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.866072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.866100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 00:20:59.099 [2024-04-18 13:50:01.866286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.866489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.099 [2024-04-18 13:50:01.866517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.099 qpair failed and we were unable to recover it. 
00:20:59.099 [2024-04-18 13:50:01.866735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.099 [2024-04-18 13:50:01.866911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.866960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.867130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.867336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.867365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.867613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.867760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.867818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.868014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.868189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.868219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.868431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.868685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.868736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.868959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.869301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.869662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.869954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.870134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.870321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.870351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.870595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.870780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.870829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.100 [2024-04-18 13:50:01.870967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.871133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.100 [2024-04-18 13:50:01.871162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.100 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.871474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.871688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.871737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.871941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.872225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.872254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.872424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.872600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.872656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.872817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.872999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.873086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.873355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.873591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.873641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.873864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.874089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.874118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.874356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.874616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.874677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.874919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.875190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.875219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.875378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.875581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.875638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.875901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.876110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.876139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.876377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.876595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.876647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.876858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.877105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.877155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.877402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.877618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.877674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.877837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.878000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.878028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.878199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.878392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.878452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.878754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.878951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.879002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.879191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.879408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.879437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.879636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.879829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.879880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.880124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.880288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.880328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.880544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.880751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.880807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.881044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.881215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.881244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.370 qpair failed and we were unable to recover it.
00:20:59.370 [2024-04-18 13:50:01.881446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.881584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.370 [2024-04-18 13:50:01.881645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.881906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.882194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.882224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.882505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.882790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.882838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.883116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.883351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.883380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.883613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.883874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.883923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.884108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.884266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.884307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.884548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.884693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.884744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.884993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.885230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.885259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.885491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.885722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.885771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.885966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.886129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.886157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.886411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.886653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.886707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.886908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.887075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.887103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.887302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.887596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.887646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.887874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.888004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.888040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.888278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.888470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.888537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.888809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.889009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.889059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.889319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.889536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.889587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.889862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.890104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.890132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.890379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.890595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.890646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.890917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.891147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.891184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.891415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.891623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.891671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.891886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.892075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.892103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.892365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.892646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.892693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.892957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.893114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.893142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.893370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.893562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.893623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.893825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.894057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.894107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.894330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.894539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.894589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.894810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.895019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.895070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.895306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.895595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.371 [2024-04-18 13:50:01.895643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.371 qpair failed and we were unable to recover it.
00:20:59.371 [2024-04-18 13:50:01.895921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.896164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.896201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.896464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.896687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.896736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.896985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.897151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.897187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.897436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.897681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.897732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.897952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.898108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.898136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.898349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.898555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.898605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.898844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.899058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.899108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.899361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.899532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.899560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.899772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.900040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.900088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.900383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.900666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.900715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.901010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.901264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.901293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.901535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.901826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.901876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.902107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.902344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.902373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.902613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.902834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.902884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.903109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.903385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.903415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.903661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.903911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.903959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.904209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.904371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.904400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.904672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.904909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.904960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.905130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.905305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.905345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.905595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.905834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.905884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.906208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.906474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.906503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.906684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.906903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.906956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.907231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.907488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.907517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.907757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.908016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.908066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.908329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.908534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.908589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.908796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.909019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.909070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.909336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.909586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.372 [2024-04-18 13:50:01.909636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.372 qpair failed and we were unable to recover it.
00:20:59.372 [2024-04-18 13:50:01.909924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.372 [2024-04-18 13:50:01.910190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.372 [2024-04-18 13:50:01.910217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.372 qpair failed and we were unable to recover it. 00:20:59.372 [2024-04-18 13:50:01.910435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.372 [2024-04-18 13:50:01.910641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.372 [2024-04-18 13:50:01.910689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.910939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.911185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.911215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.911472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.911728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.911776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.912078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.912250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.912279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.912555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.912896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.912954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.913186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.913390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.913420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.913672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.913941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.913992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.914245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.914527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.914556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.914797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.915027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.915077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.915372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.915602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.915651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.915875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.916013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.916042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.916232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.916500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.916551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.916858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.917163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.917199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.917499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.917750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.917775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.918002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.918266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.918291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.918540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.918812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.918836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.919003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.919241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.919267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.919534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.919755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.919779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.920004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.920472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.920782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.920934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.921115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.921283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.921309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.921486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.921674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.921698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.921861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.922242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.922610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.922800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.922961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 
00:20:59.373 [2024-04-18 13:50:01.923357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.923740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.373 [2024-04-18 13:50:01.923943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.373 qpair failed and we were unable to recover it. 00:20:59.373 [2024-04-18 13:50:01.924168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.924370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.924395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.924572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.924752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.924777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.924961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.925309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.925669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.925848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.926026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.926252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.926278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.926399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.926599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.926623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.926829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.926986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.927024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.927238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.927387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.927417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.927584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.927797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.927821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.928102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.928305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.928331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.928508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.928689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.928712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.928870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.929357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.929752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.929961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.930098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.930271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.930297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.930483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.930721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.930745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.930987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.931274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.931300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.931432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.931621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.931648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.931829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.932245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.932660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.932843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 
00:20:59.374 [2024-04-18 13:50:01.933031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.933218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.933257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.933413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.933640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.374 [2024-04-18 13:50:01.933701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.374 qpair failed and we were unable to recover it. 00:20:59.374 [2024-04-18 13:50:01.933919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.375 [2024-04-18 13:50:01.934192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.375 [2024-04-18 13:50:01.934229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.375 qpair failed and we were unable to recover it. 00:20:59.375 [2024-04-18 13:50:01.934348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.375 [2024-04-18 13:50:01.934518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.375 [2024-04-18 13:50:01.934542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420 00:20:59.375 qpair failed and we were unable to recover it. 
00:20:59.375 [2024-04-18 13:50:01.934722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.934962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.934986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.935196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.935315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.935341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.935491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.935790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.935819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.936118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.936291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.936316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.936489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.936684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.936707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.936903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.937140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.937163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.937330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.937572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.937595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.937833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.938296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.938702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.938978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.939222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.939350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.939375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.939560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.939723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.939747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.939935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.940319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.940739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.940983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.941244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.941379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.941404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.941738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.942283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.942647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.942864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.943117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.943308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.943333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.943550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.943798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.943820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.944031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.944403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.944833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.944982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.945237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.945389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.945429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.945691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.945891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.375 [2024-04-18 13:50:01.945915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.375 qpair failed and we were unable to recover it.
00:20:59.375 [2024-04-18 13:50:01.946166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.946371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.946396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.946625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.946806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.946830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.947071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.947295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.947320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.947525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.947811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.947835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.948054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.948420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.948767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.948964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.949252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.949395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.949420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.949594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.949806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.949844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.950035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.950237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.950261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.950472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.950655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.950678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.950984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.951250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.951274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.951508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.951684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.951721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.951971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.952244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.952270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.952454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.952691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.952714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.952985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.953283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.953311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.953569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.953753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.953814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.954070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.954277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.954301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.954456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.954634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.954657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.954877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.955328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.955734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.955905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.956198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.956381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.956406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.956590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.956806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.956830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.957032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.957202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.957233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.957373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.957550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.957587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.957768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.957985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.958007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.958221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.958387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.958413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.376 qpair failed and we were unable to recover it.
00:20:59.376 [2024-04-18 13:50:01.958581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.376 [2024-04-18 13:50:01.958757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.958791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.959055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.959274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.959300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.959493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.959756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.959779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.959942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.960093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.960131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.960298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.960509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.960536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.960814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.961289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.961611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.961790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.962061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.962287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.962326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.962517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.962703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.962726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.962949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.963207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.963233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.963482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.963675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.963699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.963929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.964168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.964199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.964437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.964641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.964664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.964877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.965269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.965705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.965848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.966098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.966256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.966280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.966509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.966752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.966775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.966978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.967247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.967272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.967531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.967796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.967821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.968115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.968373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.968399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.968742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.969009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.969065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.969338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.969545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.969587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.969797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.970059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.970082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.970395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.970642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.970674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.970935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.971109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.971137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.971345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.971521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.971549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.377 qpair failed and we were unable to recover it.
00:20:59.377 [2024-04-18 13:50:01.971762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.377 [2024-04-18 13:50:01.971931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.971954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6458000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.972268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.972494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.972523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.972741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.972857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.972881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.973094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.973323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.973350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.973577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.973726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.973755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.973984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.974224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.378 [2024-04-18 13:50:01.974250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.378 qpair failed and we were unable to recover it.
00:20:59.378 [2024-04-18 13:50:01.974526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.974820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.974865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.975143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.975373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.975400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.975687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.975958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.976000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.976253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.976409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.976435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 
00:20:59.378 [2024-04-18 13:50:01.976754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.977068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.977110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.977376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.977679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.977710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.978022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.978246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.978273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.978502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.978765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.978807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 
00:20:59.378 [2024-04-18 13:50:01.979058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.979361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.979388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.979663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.979944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.979987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.980149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.980366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.980391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.980683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.980966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.981012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 
00:20:59.378 [2024-04-18 13:50:01.981300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.981561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.981604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.981763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.981980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.982021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.982187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.982372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.982412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.982674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.982892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.982932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 
00:20:59.378 [2024-04-18 13:50:01.983195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.983394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.983435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.983701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.984006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.984048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.984271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.984483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.984524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.984806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.984993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.985047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 
00:20:59.378 [2024-04-18 13:50:01.985322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.985557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.985601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.985824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.985999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.986020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.378 [2024-04-18 13:50:01.986216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.986425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.378 [2024-04-18 13:50:01.986468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.378 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.986663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.986886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.986928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.987130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.987387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.987428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.987620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.987865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.987903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.988143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.988335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.988361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.988541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.988845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.988884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.989157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.989367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.989393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.989564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.989828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.989874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.990153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.990474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.990499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.990819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.991142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.991198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.991476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.991678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.991719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.991972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.992235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.992275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.992569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.992938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.992992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.993300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.993519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.993543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.993812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.994067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.994108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.994400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.994658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.994699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.994963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.995238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.995263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.995499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.995751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.995794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.996045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.996365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.996392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.996623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.996797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.996838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.997055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.997344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.997368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.997664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.997924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.997965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 
00:20:59.379 [2024-04-18 13:50:01.998211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.998349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.998372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.998596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.998810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.998855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.999004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.999229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.379 [2024-04-18 13:50:01.999253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.379 qpair failed and we were unable to recover it. 00:20:59.379 [2024-04-18 13:50:01.999511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:01.999810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:01.999851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 
00:20:59.380 [2024-04-18 13:50:02.000103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.000406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.000432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.000699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.001004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.001048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.001297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.001558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.001598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.001862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 
00:20:59.380 [2024-04-18 13:50:02.002284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.002667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.002926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.003189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.003448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.003485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 00:20:59.380 [2024-04-18 13:50:02.003746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.004040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.380 [2024-04-18 13:50:02.004084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.380 qpair failed and we were unable to recover it. 
00:20:59.380 [2024-04-18 13:50:02.004352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.380 [2024-04-18 13:50:02.004556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.380 [2024-04-18 13:50:02.004598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.380 qpair failed and we were unable to recover it.
00:20:59.380 [last three records repeated for every subsequent connect attempt from 13:50:02.004767 through 13:50:02.049294: connect() failed with errno = 111 in posix_sock_create, nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0x7f6450000b90 (addr=10.0.0.2, port=4420), and each qpair failed and could not be recovered]
00:20:59.383 [2024-04-18 13:50:02.049536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.049752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.049793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.050077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.050278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.050300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.050525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.050715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.050757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.050975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.051214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.051237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 
00:20:59.383 [2024-04-18 13:50:02.051441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.051597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.051639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.051893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.052153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.052181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.052487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.052736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.052776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.053031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.053310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.053333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 
00:20:59.383 [2024-04-18 13:50:02.053533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.053765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.053805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.053989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.054245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.054268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.054456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.054717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.054757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.055044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.055283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.055306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 
00:20:59.383 [2024-04-18 13:50:02.055547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.055849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.055892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.056160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.056400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.056423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.056683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.056930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.383 [2024-04-18 13:50:02.056958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.383 qpair failed and we were unable to recover it. 00:20:59.383 [2024-04-18 13:50:02.057204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.057402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.057425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.057622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.057836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.057878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.058131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.058402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.058426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.058600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.058844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.058885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.059150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.059301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.059323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.059611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.059917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.059960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.060210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.060432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.060454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.060728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.060988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.061030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.061317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.061597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.061638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.061905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.062068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.062089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.062354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.062592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.062632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.062852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.063013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.063035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.063197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.063470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.063510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.063765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.063981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.064022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.064193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.064342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.064383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.064681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.064948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.064989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.065240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.065435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.065457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.065649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.065919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.065961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.066198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.066445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.066468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.066782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.067083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.067124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.067371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.067653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.067694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.384 [2024-04-18 13:50:02.067998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.068208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.068231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.068481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.068695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.068736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.068994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.069287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.069311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 00:20:59.384 [2024-04-18 13:50:02.069611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.069795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.384 [2024-04-18 13:50:02.069836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.384 qpair failed and we were unable to recover it. 
00:20:59.385 [2024-04-18 13:50:02.070034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.070211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.070234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.070379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.070600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.070641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.070922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.071182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.071220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.071503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.071704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.071745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 
00:20:59.385 [2024-04-18 13:50:02.071924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.072201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.072225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.072413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.072611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.072651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.072914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.073113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.073135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.073344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.073630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.073672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 
00:20:59.385 [2024-04-18 13:50:02.073924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.074127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.074149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.074336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.074527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.074568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.074815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.075284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 
00:20:59.385 [2024-04-18 13:50:02.075631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.075889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.076023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.076202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.076224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.076453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.076663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.076703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.076855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.077111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.077133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 
00:20:59.385 [2024-04-18 13:50:02.077417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.077685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.077726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.077930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.078126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.078148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.078442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.078740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.078783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 00:20:59.385 [2024-04-18 13:50:02.079077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.079372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.385 [2024-04-18 13:50:02.079394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.385 qpair failed and we were unable to recover it. 
00:20:59.388 [2024-04-18 13:50:02.123875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.388 qpair failed and we were unable to recover it. 00:20:59.388 [2024-04-18 13:50:02.124362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.388 qpair failed and we were unable to recover it. 00:20:59.388 [2024-04-18 13:50:02.124714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.124976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.388 qpair failed and we were unable to recover it. 00:20:59.388 [2024-04-18 13:50:02.125254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.125541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.125582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.388 qpair failed and we were unable to recover it. 
00:20:59.388 [2024-04-18 13:50:02.125807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.126020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.126062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.388 qpair failed and we were unable to recover it. 00:20:59.388 [2024-04-18 13:50:02.126354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.388 [2024-04-18 13:50:02.126631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.126671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.126957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.127254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.127276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.127536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.127834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.127874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.128163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.128468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.128491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.128781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.129093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.129139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.129425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.129616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.129667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.129964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.130223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.130246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.130471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.130668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.130710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.130966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.131269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.131291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.131550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.131714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.131755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.131912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.132169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.132211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.132459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.132711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.132751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.132985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.133228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.133250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.133545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.133748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.133789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.134079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.134379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.134402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.134693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.134959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.135000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.135275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.135562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.135584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.135823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.136210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.136664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.136962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.137121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.137386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.137410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.137715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.137947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.137987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.138198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.138488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.138529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.138830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.139055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.139095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.139313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.139570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.139609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.139862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.140156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.140204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.140422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.140663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.140704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 
00:20:59.389 [2024-04-18 13:50:02.141008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.141332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.141355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.389 qpair failed and we were unable to recover it. 00:20:59.389 [2024-04-18 13:50:02.141593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.141914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.389 [2024-04-18 13:50:02.141942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.142197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.142463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.142485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.142720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.142977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.143019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.143313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.143605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.143627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.143887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.144113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.144154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.144465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.144749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.144789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.144949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.145181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.145204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.145461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.145715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.145756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.145878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.146091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.146135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.146350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.146633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.146674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.146965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.147313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.147336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.147582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.147878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.147919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.148205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.148445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.148467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.148769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.149067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.149108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.149404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.149686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.149726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.150002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.150267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.150290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.150538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.150740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.150781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.151013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.151271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.151294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.151589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.151907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.151934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.152192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.152440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.152461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.152751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.153049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.153090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.153375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.153653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.153693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 00:20:59.390 [2024-04-18 13:50:02.153934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.154235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.390 [2024-04-18 13:50:02.154258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:20:59.390 qpair failed and we were unable to recover it. 
00:20:59.390 [2024-04-18 13:50:02.154566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.390 [2024-04-18 13:50:02.154831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.390 [2024-04-18 13:50:02.154879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.390 qpair failed and we were unable to recover it.
00:20:59.390 [2024-04-18 13:50:02.155205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.390 [2024-04-18 13:50:02.155484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.390 [2024-04-18 13:50:02.155521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.390 qpair failed and we were unable to recover it.
00:20:59.390 [2024-04-18 13:50:02.155766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.390 [2024-04-18 13:50:02.156016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.156058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.156329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.156618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.156661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.156958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.157267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.157290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.157521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.157784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.157831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.158078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.158376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.158399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.158713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.159007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.159049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.159346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.159629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.159670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.159965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.160275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.160298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.160596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.160873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.160914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.161128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.161370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.161395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.161640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.161902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.161945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.162238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.162484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.162512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.162824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.163083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.163134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.163336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.163583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.163631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.163915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.164228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.164253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.164569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.164824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.164852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.165085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.165388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.391 [2024-04-18 13:50:02.165414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.391 qpair failed and we were unable to recover it.
00:20:59.391 [2024-04-18 13:50:02.165699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.166020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.166061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.166333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.166588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.166631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.166899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.167205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.167232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.167480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.167725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.167766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.167950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.168218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.168248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.168535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.168790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.168818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.169124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.169393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.169420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.169461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185c910 (9): Bad file descriptor
00:20:59.663 [2024-04-18 13:50:02.169797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.170079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.170110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.170412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.170666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.170694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.170992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.171264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.171290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.171539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.171766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.171790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.172020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.172253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.172278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.172576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.172842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.172869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.173135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.173429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.173472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.173718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.174006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.174033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.174334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.174586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.174613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.663 qpair failed and we were unable to recover it.
00:20:59.663 [2024-04-18 13:50:02.174882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.175193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.663 [2024-04-18 13:50:02.175239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.175504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.175755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.175782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.176030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.176290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.176315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.176633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.176885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.176912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.177145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.177458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.177483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.177737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.177989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.178016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.178312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.178600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.178627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.178873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.179136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.179162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.179398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.179685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.179717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.179984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.180262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.180288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.180572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.180837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.180864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.181168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.181499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.181527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.181837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.182131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.182158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.182481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.182735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.182762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.182974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.183262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.183288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.183596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.183801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.183826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.184084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.184374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.184402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.184693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.184997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.185024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.185307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.185574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.185601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.185826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.186071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.186120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.186355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.186607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.186634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.186927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.187225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.187253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.187512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.187796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.187823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.188071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.188362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.188390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.188641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.188892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.188916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.664 qpair failed and we were unable to recover it.
00:20:59.664 [2024-04-18 13:50:02.189201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.189396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.664 [2024-04-18 13:50:02.189421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.189732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.190019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.190047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.190347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.190633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.190661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.190956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.191240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.191267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.191519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.191771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.191798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.192042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.192325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.192354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.192652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.192945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.192972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.193237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.193495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.193523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.193789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.193994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.194019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.194319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.194618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.194645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.194943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.195185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.195213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.195476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.195737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.195765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.195965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.196228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.196256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.196544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.196834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.196861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.197188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.197404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.197428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.197650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.197891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.197918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.198201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.198490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.198517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.198824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.199106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.199133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.199437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.199735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.199759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.200078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.200380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.200405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.200642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.200932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.200985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.201279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.201573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.201601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.201867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.202154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.202188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.665 qpair failed and we were unable to recover it.
00:20:59.665 [2024-04-18 13:50:02.202522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.665 [2024-04-18 13:50:02.202828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.202867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.666 qpair failed and we were unable to recover it.
00:20:59.666 [2024-04-18 13:50:02.203153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.203458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.203485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.666 qpair failed and we were unable to recover it.
00:20:59.666 [2024-04-18 13:50:02.203739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.203953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.203980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.666 qpair failed and we were unable to recover it.
00:20:59.666 [2024-04-18 13:50:02.204250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.204498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.204522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.666 qpair failed and we were unable to recover it.
00:20:59.666 [2024-04-18 13:50:02.204781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.205091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.666 [2024-04-18 13:50:02.205119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.666 qpair failed and we were unable to recover it.
00:20:59.666 [2024-04-18 13:50:02.205377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.205666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.205694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.206006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.206268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.206297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.206457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.206642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.206669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.206824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 
00:20:59.666 [2024-04-18 13:50:02.207374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.207688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.207969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.208184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.208372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.208404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.208614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.208783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.208807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 
00:20:59.666 [2024-04-18 13:50:02.208944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.209340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.209744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.209954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.210118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.210323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.210352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 
00:20:59.666 [2024-04-18 13:50:02.210572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.210730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.210757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.210913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.211254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.211625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.211807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 
00:20:59.666 [2024-04-18 13:50:02.211970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.212326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.212700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.212856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.213064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.213268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.213293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 
00:20:59.666 [2024-04-18 13:50:02.213421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.213576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.213600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.666 qpair failed and we were unable to recover it. 00:20:59.666 [2024-04-18 13:50:02.213784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.666 [2024-04-18 13:50:02.213946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.213973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.214185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.214365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.214390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.214554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.214734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.214759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 
00:20:59.667 [2024-04-18 13:50:02.214932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.215289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.215672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.215898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.216039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.216252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.216280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 
00:20:59.667 [2024-04-18 13:50:02.216476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.216650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.216677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.216840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.216997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.217024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.217225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.217383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.217410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.217548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.217689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.217711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 
00:20:59.667 [2024-04-18 13:50:02.217902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.218296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.218636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.218800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.218955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 
00:20:59.667 [2024-04-18 13:50:02.219325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.219687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.219873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.220084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.220440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 
00:20:59.667 [2024-04-18 13:50:02.220820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.220948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.667 [2024-04-18 13:50:02.221133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.221312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.667 [2024-04-18 13:50:02.221340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.667 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.221538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.221700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.221727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.221923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 
00:20:59.668 [2024-04-18 13:50:02.222269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.222637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.222806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.222949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.223318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 
00:20:59.668 [2024-04-18 13:50:02.223652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.223795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.223935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.224284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.224645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.224851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 
00:20:59.668 [2024-04-18 13:50:02.224997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.225287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.225629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.225775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 00:20:59.668 [2024-04-18 13:50:02.225952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.226092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.226114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it. 
00:20:59.668 [2024-04-18 13:50:02.226293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.226434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.668 [2024-04-18 13:50:02.226474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.668 qpair failed and we were unable to recover it.
[... the same error sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420, ending in "qpair failed and we were unable to recover it." — repeats dozens of times, from 13:50:02.226293 through 13:50:02.255538 ...]
00:20:59.673 [2024-04-18 13:50:02.255660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.255836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.255863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.256015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.256340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.256601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.256797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 
00:20:59.673 [2024-04-18 13:50:02.256982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.257280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.257557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.257715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.257890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 
00:20:59.673 [2024-04-18 13:50:02.258245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.258586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.258762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.258905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.259272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 
00:20:59.673 [2024-04-18 13:50:02.259624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.259808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.259956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.260214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.260520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.260709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 
00:20:59.673 [2024-04-18 13:50:02.260860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.261186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.261519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.261717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 00:20:59.673 [2024-04-18 13:50:02.261877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.262026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.262053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.673 qpair failed and we were unable to recover it. 
00:20:59.673 [2024-04-18 13:50:02.262235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.673 [2024-04-18 13:50:02.262411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.262439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.262614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.262759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.262786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.262963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.263246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 
00:20:59.674 [2024-04-18 13:50:02.263616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.263786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.263936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.264251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.264570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.264748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 
00:20:59.674 [2024-04-18 13:50:02.264906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.265211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.265524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.265725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.265871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 
00:20:59.674 [2024-04-18 13:50:02.266262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.266582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.266783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.266939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.267248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 
00:20:59.674 [2024-04-18 13:50:02.267561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.267699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.267844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.268214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.268504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 
00:20:59.674 [2024-04-18 13:50:02.268810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.268988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.269015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.674 qpair failed and we were unable to recover it. 00:20:59.674 [2024-04-18 13:50:02.269138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.674 [2024-04-18 13:50:02.269306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.269331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.269449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.269635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.269661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.269816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.269995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.270021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.675 [2024-04-18 13:50:02.270206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.270389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.270412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.270556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.270729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.270756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.270910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.271244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.675 [2024-04-18 13:50:02.271587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.271761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.271910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.272213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.272535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.272715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.675 [2024-04-18 13:50:02.272866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.273203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.273542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.273872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.273999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.675 [2024-04-18 13:50:02.274199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.274494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.274797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.274976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.275110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.675 [2024-04-18 13:50:02.275469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.275822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.275997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.276189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.276344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.276367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 00:20:59.675 [2024-04-18 13:50:02.276492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.276615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.675 [2024-04-18 13:50:02.276642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.675 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.276793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.276947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.276975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.277130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.277421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.277752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.277954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.278083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.278405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.278679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.278830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.278970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.279262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.279525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.279711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.279893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.280246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.280561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.280767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.280881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.281192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.281520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.281824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.281969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.282130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.282284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.282312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.282464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.282626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.282653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.282838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.282975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.283129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.283410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.283785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.283915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.284098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.284393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.284716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.284842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.285033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.285206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.285234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 00:20:59.676 [2024-04-18 13:50:02.285374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.285525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.285553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.676 qpair failed and we were unable to recover it. 
00:20:59.676 [2024-04-18 13:50:02.285710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.676 [2024-04-18 13:50:02.285843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.285865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.285999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.286358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.286723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.286902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.287049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.287369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.287665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.287911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.288063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.288399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.288755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.288887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.289130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.289476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.289839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.289975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.290114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.290448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.290783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.290987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.291208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.291349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.291377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.291534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.291676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.291703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.291879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.292338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.292719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.292928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.293077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.293245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.293286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.293431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.293608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.293635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.293873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.294223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.294553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.294752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.294986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.295335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 
00:20:59.677 [2024-04-18 13:50:02.295668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.295838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.677 qpair failed and we were unable to recover it. 00:20:59.677 [2024-04-18 13:50:02.296018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.677 [2024-04-18 13:50:02.296261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.296290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.296439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.296667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.296695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.296888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 
00:20:59.678 [2024-04-18 13:50:02.297267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.297573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.297724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.297907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.298314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 
00:20:59.678 [2024-04-18 13:50:02.298650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.298779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.298931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.299260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.299583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.299787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 
00:20:59.678 [2024-04-18 13:50:02.299945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.300309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.300701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.300834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 00:20:59.678 [2024-04-18 13:50:02.300984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.301185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.678 [2024-04-18 13:50:02.301213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.678 qpair failed and we were unable to recover it. 
00:20:59.678 [2024-04-18 13:50:02.301365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.301520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.301547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.301732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.301871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.301916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.302091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.302439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.302836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.302994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.303131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.303293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.303321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.303475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.303649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.303676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.303857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.304211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.304521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.304698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.304926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.305074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.305101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.305280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.305425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.305452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.678 qpair failed and we were unable to recover it.
00:20:59.678 [2024-04-18 13:50:02.305594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.678 [2024-04-18 13:50:02.305804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.305855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.305998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.306296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.306656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.306859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.306973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.307244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.307273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.307421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.307567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.307594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.307771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.308223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.308575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.308830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.308999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.309398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.309786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.309960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.310136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.310441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.679 qpair failed and we were unable to recover it.
00:20:59.679 [2024-04-18 13:50:02.310812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.679 [2024-04-18 13:50:02.310976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.311140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.311304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.311332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.311511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.311641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.311663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.311899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.312218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.312549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.312713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.312902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.313323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.313649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.313874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.314058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.314204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.680 [2024-04-18 13:50:02.314233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.680 qpair failed and we were unable to recover it.
00:20:59.680 [2024-04-18 13:50:02.314356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.314504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.314531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.681 qpair failed and we were unable to recover it.
00:20:59.681 [2024-04-18 13:50:02.314680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.314814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.314852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.681 qpair failed and we were unable to recover it.
00:20:59.681 [2024-04-18 13:50:02.315050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.315235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.681 [2024-04-18 13:50:02.315259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.315396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.315552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.315579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.315748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.315890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.315926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.316076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.316391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.316703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.316825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.316979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.317155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.317188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.682 qpair failed and we were unable to recover it.
00:20:59.682 [2024-04-18 13:50:02.317367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.317538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.682 [2024-04-18 13:50:02.317564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.317736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.317868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.317890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.318018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.318421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.318734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.318931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.319081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.319401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.319682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.319933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.320059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.320190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.683 [2024-04-18 13:50:02.320219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.683 qpair failed and we were unable to recover it.
00:20:59.683 [2024-04-18 13:50:02.320393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.320631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.320658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.320845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.320984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.321025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.321170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.321325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.321352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.321607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.321727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.321754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.321951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.322325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.322724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.322932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.323113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.323227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.323250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.323439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.323609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.684 [2024-04-18 13:50:02.323636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.684 qpair failed and we were unable to recover it.
00:20:59.684 [2024-04-18 13:50:02.323796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.323952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.323979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.324152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.324338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.324374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.324530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.324696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.324723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.324900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.325186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.325498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.325705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.325911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.326082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.326109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.685 qpair failed and we were unable to recover it.
00:20:59.685 [2024-04-18 13:50:02.326250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.326390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.685 [2024-04-18 13:50:02.326413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.686 qpair failed and we were unable to recover it.
00:20:59.686 [2024-04-18 13:50:02.326542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.686 [2024-04-18 13:50:02.326656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.686 [2024-04-18 13:50:02.326683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.686 qpair failed and we were unable to recover it.
00:20:59.686 [2024-04-18 13:50:02.326829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.326943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.326970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.327109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.327337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.327645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.327800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 
00:20:59.686 [2024-04-18 13:50:02.327957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.328317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.328646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.328794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 00:20:59.686 [2024-04-18 13:50:02.328920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.329034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.686 [2024-04-18 13:50:02.329055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.686 qpair failed and we were unable to recover it. 
00:20:59.687 [2024-04-18 13:50:02.329245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.329366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.329393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.329575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.329724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.329751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.329891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.330269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 
00:20:59.687 [2024-04-18 13:50:02.330623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.330817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.330996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.331363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 00:20:59.687 [2024-04-18 13:50:02.331740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.687 [2024-04-18 13:50:02.331938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.687 qpair failed and we were unable to recover it. 
00:20:59.688 [2024-04-18 13:50:02.332096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.332410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.332758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.332932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.333107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 
00:20:59.688 [2024-04-18 13:50:02.333441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.333798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.333935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.334075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.334219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.334243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 00:20:59.688 [2024-04-18 13:50:02.334399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.334542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.688 [2024-04-18 13:50:02.334569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.688 qpair failed and we were unable to recover it. 
00:20:59.689 [2024-04-18 13:50:02.334744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.334892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.334919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.335099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.335461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.335809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.335979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 
00:20:59.689 [2024-04-18 13:50:02.336163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.336508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.336795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.336972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.689 qpair failed and we were unable to recover it. 00:20:59.689 [2024-04-18 13:50:02.337134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.689 [2024-04-18 13:50:02.337281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.337308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 
00:20:59.690 [2024-04-18 13:50:02.337445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.337565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.337592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.337756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.337937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.337964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.338183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.338337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.338364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.338540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.338771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.338798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 
00:20:59.690 [2024-04-18 13:50:02.338938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.339211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.339684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.339886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.340062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.340206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.340234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 
00:20:59.690 [2024-04-18 13:50:02.340384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.340602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.690 [2024-04-18 13:50:02.340629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.690 qpair failed and we were unable to recover it. 00:20:59.690 [2024-04-18 13:50:02.340853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.340977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.341021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.341164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.341322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.341350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.341530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.341743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.341770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 
00:20:59.691 [2024-04-18 13:50:02.341922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.342285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.342633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.342777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.343002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 
00:20:59.691 [2024-04-18 13:50:02.343348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.343629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.343822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.691 qpair failed and we were unable to recover it. 00:20:59.691 [2024-04-18 13:50:02.344042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.691 [2024-04-18 13:50:02.344206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.344235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.344358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.344477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.344504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 
00:20:59.692 [2024-04-18 13:50:02.344733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.344937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.344964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.345156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.345276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.345303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.345431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.345588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.345616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.345788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.345988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.346015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 
00:20:59.692 [2024-04-18 13:50:02.346188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.346334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.346361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.346524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.346703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.346730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.346914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.347316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 
00:20:59.692 [2024-04-18 13:50:02.347713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.347888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.348091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.348278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.348306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.692 qpair failed and we were unable to recover it. 00:20:59.692 [2024-04-18 13:50:02.348438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.692 [2024-04-18 13:50:02.348636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.348663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 00:20:59.693 [2024-04-18 13:50:02.348788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.348966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.348993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 
00:20:59.693 [2024-04-18 13:50:02.349172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 00:20:59.693 [2024-04-18 13:50:02.349483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 00:20:59.693 [2024-04-18 13:50:02.349809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.349994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 00:20:59.693 [2024-04-18 13:50:02.350190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.350350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.693 [2024-04-18 13:50:02.350373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.693 qpair failed and we were unable to recover it. 
00:20:59.700 [2024-04-18 13:50:02.350562 .. 13:50:02.382402] (the identical failure loop repeats continuously: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:20:59.700 [2024-04-18 13:50:02.382591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.382745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.382772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.700 qpair failed and we were unable to recover it. 00:20:59.700 [2024-04-18 13:50:02.383026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.383200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.383228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.700 qpair failed and we were unable to recover it. 00:20:59.700 [2024-04-18 13:50:02.383399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.383553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.383589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.700 qpair failed and we were unable to recover it. 00:20:59.700 [2024-04-18 13:50:02.383804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.700 [2024-04-18 13:50:02.383981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 
00:20:59.701 [2024-04-18 13:50:02.384150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.384510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.384839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.384983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.385010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.385172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.385322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.385350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 
00:20:59.701 [2024-04-18 13:50:02.385514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.385673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.385709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.385910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.386325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.386687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.386860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 
00:20:59.701 [2024-04-18 13:50:02.387045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.387212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.387248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.387400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.387582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.387609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.387841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.387979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.388006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 00:20:59.701 [2024-04-18 13:50:02.388188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.388337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.701 [2024-04-18 13:50:02.388364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.701 qpair failed and we were unable to recover it. 
00:20:59.701 [2024-04-18 13:50:02.388544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.388746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.388773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.388965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.389136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.389163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.389415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.389594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.389621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.389861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 
00:20:59.702 [2024-04-18 13:50:02.390225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.390699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.390893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.391052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.391460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 
00:20:59.702 [2024-04-18 13:50:02.391790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.391952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.392108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.392247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.392275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.392387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.392578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.392616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.392842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.393004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.393031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 
00:20:59.702 [2024-04-18 13:50:02.393181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.393306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.702 [2024-04-18 13:50:02.393333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.702 qpair failed and we were unable to recover it. 00:20:59.702 [2024-04-18 13:50:02.393541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.393729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.393756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.393993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.394204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.394232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.394403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.394659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.394686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 
00:20:59.703 [2024-04-18 13:50:02.394835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.395268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.395643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.395911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.396094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.396275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.396305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 
00:20:59.703 [2024-04-18 13:50:02.396584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.396830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.396857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.396993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.397322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.397803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.397954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 
00:20:59.703 [2024-04-18 13:50:02.398118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.398267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.398295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.398467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.398699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.398726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.703 [2024-04-18 13:50:02.398901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.399022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.703 [2024-04-18 13:50:02.399055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.703 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.399246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.399425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.399457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 
00:20:59.704 [2024-04-18 13:50:02.399710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.399842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.399869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.400095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.400280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.400308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.400481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.400683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.400717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.400935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 
00:20:59.704 [2024-04-18 13:50:02.401224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.401566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.401760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.401951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.402404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 
00:20:59.704 [2024-04-18 13:50:02.402758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.402970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.403087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.403391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.403721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.403893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 
00:20:59.704 [2024-04-18 13:50:02.404066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.404230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.404253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.404441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.404640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.704 [2024-04-18 13:50:02.404667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.704 qpair failed and we were unable to recover it. 00:20:59.704 [2024-04-18 13:50:02.404898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.405259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 
00:20:59.705 [2024-04-18 13:50:02.405611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.405812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.405949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.406337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.406653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.406861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 
00:20:59.705 [2024-04-18 13:50:02.407017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.407422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.407772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.407981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.408147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.408304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.408331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 
00:20:59.705 [2024-04-18 13:50:02.408615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.408776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.705 [2024-04-18 13:50:02.408803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.705 qpair failed and we were unable to recover it. 00:20:59.705 [2024-04-18 13:50:02.408971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.409360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.409763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.409933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 
00:20:59.706 [2024-04-18 13:50:02.410118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.410304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.410342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.410534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.410679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.410706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.410875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.411214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 
00:20:59.706 [2024-04-18 13:50:02.411565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.411727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.411910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.412337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.412670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.412857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 
00:20:59.706 [2024-04-18 13:50:02.413037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.413205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.413233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.413426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.413650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.413680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.413862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.414022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.414048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.706 qpair failed and we were unable to recover it. 00:20:59.706 [2024-04-18 13:50:02.414249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.414401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.706 [2024-04-18 13:50:02.414437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 
00:20:59.707 [2024-04-18 13:50:02.414614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.414831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.414856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.415026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.415407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.415827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.415990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 
00:20:59.707 [2024-04-18 13:50:02.416129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.416372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.416400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.416590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.416821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.416846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.416959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.417325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 
00:20:59.707 [2024-04-18 13:50:02.417698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.417992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.418181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.418366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.418402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.418528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.418694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.418721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.418933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 
00:20:59.707 [2024-04-18 13:50:02.419299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.419687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.419929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.420107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.420247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.420272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 00:20:59.707 [2024-04-18 13:50:02.420387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.420532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.707 [2024-04-18 13:50:02.420556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.707 qpair failed and we were unable to recover it. 
00:20:59.707 [2024-04-18 13:50:02.420782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.420957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.420981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.421216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.421393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.421418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.421617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.421831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.421858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.422047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.422205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.422237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 
00:20:59.708 [2024-04-18 13:50:02.422412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.422590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.422616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.422850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.423243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.423647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.423879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 
00:20:59.708 [2024-04-18 13:50:02.424055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.424218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.424246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.424426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.424607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.424634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.424873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.425226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 
00:20:59.708 [2024-04-18 13:50:02.425545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.425756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.425927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.426130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.426162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.708 [2024-04-18 13:50:02.426348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.426492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.708 [2024-04-18 13:50:02.426519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.708 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.426672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.426825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.426852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 
00:20:59.709 [2024-04-18 13:50:02.427090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.427208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.427233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.427423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.427608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.427635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.427821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.428270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 
00:20:59.709 [2024-04-18 13:50:02.428610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.428863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.429051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.429228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.429268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.429522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.429734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.429759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.429974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 
00:20:59.709 [2024-04-18 13:50:02.430366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.430748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.430928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.431230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.431443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.431471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.431627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.431846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.431873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 
00:20:59.709 [2024-04-18 13:50:02.432052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.432170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.432201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.432377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.432572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.432599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.432860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.433217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 
00:20:59.709 [2024-04-18 13:50:02.433729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.709 [2024-04-18 13:50:02.433927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.709 qpair failed and we were unable to recover it. 00:20:59.709 [2024-04-18 13:50:02.434073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.434209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.434236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.710 qpair failed and we were unable to recover it. 00:20:59.710 [2024-04-18 13:50:02.434422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.434695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.434722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.710 qpair failed and we were unable to recover it. 00:20:59.710 [2024-04-18 13:50:02.434848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.435058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.710 [2024-04-18 13:50:02.435085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.710 qpair failed and we were unable to recover it. 
00:20:59.710 [2024-04-18 13:50:02.435245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.435408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.435436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.435657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.435804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.435831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.435990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.436200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.436228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.436426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.436637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.436664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.436870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.437116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.437140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.437353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.437570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.437594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.437876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.438264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.438713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.438918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.439062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.439224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.439253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.439482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.439645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.439670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.439929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.440116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.440143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.440346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.440563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.440590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.710 qpair failed and we were unable to recover it.
00:20:59.710 [2024-04-18 13:50:02.440754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.710 [2024-04-18 13:50:02.440985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.441020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.441293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.441496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.441531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.441738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.441915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.441941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.442144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.442323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.442361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.442583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.442780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.442830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.442970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.443171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.443210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.443430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.443623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.443649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.443874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.444234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.444682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.444855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.445002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.445385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.445765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.445961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.446128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.446298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.446323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.711 [2024-04-18 13:50:02.446532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.446705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.711 [2024-04-18 13:50:02.446732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.711 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.446942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.447285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.447613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.447772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.448047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.448417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.448688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.448890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.449118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.449444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.449797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.449976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.450206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.450455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.450479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.450650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.450786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.450814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.451026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.451463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.451813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.451977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.452126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.452298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.452325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.452521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.452749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.452773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.452975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.453151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.453185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.453399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.453577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.453604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.453756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.454223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.454670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.454893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.455078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.455288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.455329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.712 qpair failed and we were unable to recover it.
00:20:59.712 [2024-04-18 13:50:02.455488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.455729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.712 [2024-04-18 13:50:02.455756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.713 qpair failed and we were unable to recover it.
00:20:59.713 [2024-04-18 13:50:02.455980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.713 [2024-04-18 13:50:02.456151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.713 [2024-04-18 13:50:02.456184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.713 qpair failed and we were unable to recover it.
00:20:59.987 [2024-04-18 13:50:02.456348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.456520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.456545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.987 qpair failed and we were unable to recover it.
00:20:59.987 [2024-04-18 13:50:02.456762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.456905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.456930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.987 qpair failed and we were unable to recover it.
00:20:59.987 [2024-04-18 13:50:02.457084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.457273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.457300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.987 qpair failed and we were unable to recover it.
00:20:59.987 [2024-04-18 13:50:02.457521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.457641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.457668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.987 qpair failed and we were unable to recover it.
00:20:59.987 [2024-04-18 13:50:02.457871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.987 [2024-04-18 13:50:02.458082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.458109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.458277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.458431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.458458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.458689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.458917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.458944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.459127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.459388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.459424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.459613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.459796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.459823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.459998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.460182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.460222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.460418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.460623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.460650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.460804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.460992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.461046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.461253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.461418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.461446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.461592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.461720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.461748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.461935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.462288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.462638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.462857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.463182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.463348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.463380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.463530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.463722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.463750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.463985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.464145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.988 [2024-04-18 13:50:02.464172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.988 qpair failed and we were unable to recover it.
00:20:59.988 [2024-04-18 13:50:02.464396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.464642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.464678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.464872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.465326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.465676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.465828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 
00:20:59.988 [2024-04-18 13:50:02.465959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.466189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.466215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.466417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.466628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.466653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.466834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.467243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 
00:20:59.988 [2024-04-18 13:50:02.467648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.467874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.468075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.468275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.468303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.468511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.468653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.468681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.468931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.469084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.469111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 
00:20:59.988 [2024-04-18 13:50:02.469356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.469544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.988 [2024-04-18 13:50:02.469571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.988 qpair failed and we were unable to recover it. 00:20:59.988 [2024-04-18 13:50:02.469736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.469911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.469950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.470154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.470386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.470413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.470585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.470754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.470801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.471069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.471249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.471274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.471486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.471720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.471747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.471932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.472117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.472144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.472433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.472627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.472654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.472855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.473257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.473617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.473779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.473930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.474390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.474808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.474983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.475198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.475407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.475432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.475639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.475857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.475884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.476086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.476308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.476334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.476535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.476747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.476788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.476947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.477161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.477193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.477495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.477738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.477765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.478056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.478244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.478272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.478494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.478718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.478774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.478965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.479165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.479209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.479477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.479813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.479840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.479994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.480213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.480252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.480400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.480556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.480580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.480818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.480977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.481002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.481150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.481382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.481416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 
00:20:59.989 [2024-04-18 13:50:02.481613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.481760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.481785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.989 [2024-04-18 13:50:02.482043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.482284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.989 [2024-04-18 13:50:02.482312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.989 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.482445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.482569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.482595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.482767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.483257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.483613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.483796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.483991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.484334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.484704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.484955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.485197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.485351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.485377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.485575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.485790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.485816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.486040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.486394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.486729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.486937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.487084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.487365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.487723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.487942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.488103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.488469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.488789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.488948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.489120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.489265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.489290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.489435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.489666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.489693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.489862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.490044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.490071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 00:20:59.990 [2024-04-18 13:50:02.490225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.490372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.990 [2024-04-18 13:50:02.490399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.990 qpair failed and we were unable to recover it. 
00:20:59.990 [2024-04-18 13:50:02.490596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.490735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.490759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.490912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.491255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.491651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.491853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.492011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.492232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.492260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.492484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.492652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.492679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.990 qpair failed and we were unable to recover it.
00:20:59.990 [2024-04-18 13:50:02.492857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.990 [2024-04-18 13:50:02.493053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.493080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.493204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.493338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.493366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.493541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.493722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.493757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.493943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.494357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.494828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.494983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.495161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.495323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.495348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.495518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.495636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.495663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.495839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.495982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.496009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.496156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.496338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.496380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.496596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.496761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.496788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.496947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.497298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.497756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.497960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.498141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.498359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.498387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.498599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.498761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.498797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.498980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.499227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.499256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.499423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.499647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.499682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.499945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.500128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.500155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.500349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.500548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.500575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.500844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.501227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.501653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.501869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.502011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.502212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.502240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.502429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.502655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.502681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.502867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.502987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.503013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.503181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.503410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.503437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.503626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.503841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.503876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.504135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.504323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.504361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.991 qpair failed and we were unable to recover it.
00:20:59.991 [2024-04-18 13:50:02.504563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.991 [2024-04-18 13:50:02.504742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.504769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.505120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.505366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.505394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.505616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.505797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.505824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.506012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.506214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.506242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.506493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.506700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.506726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.506893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.507238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.507621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.507775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.507940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.508287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.508565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.508735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.508859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.509282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.509722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.509925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.510148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.510297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.510324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.510474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.510721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.510765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.510948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.511245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.511707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.511948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.512155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.512337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.512364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.512615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.512768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.512801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.512965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.513290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.992 [2024-04-18 13:50:02.513588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.992 [2024-04-18 13:50:02.513770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.992 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.514018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.514385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.514724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.514898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.515078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.515234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.515262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.515549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.515669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.515695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.515861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.515973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.516000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.516145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.516311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.516339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.516548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.516709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.516736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.516920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.517293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.517614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.517783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.517922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.518329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.518647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.518857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.519115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.519247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.519275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.519461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.519592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.519619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.519980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.520296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.520722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.520939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.521172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.521399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.521426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.521623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.521771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.521798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.522047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.522214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.522242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.522426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.522649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.522676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.522921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.523377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.523786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.523996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.524280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.524443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.524470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.524595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.524733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.524760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.993 qpair failed and we were unable to recover it.
00:20:59.993 [2024-04-18 13:50:02.524883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.993 [2024-04-18 13:50:02.525028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.994 [2024-04-18 13:50:02.525050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:20:59.994 qpair failed and we were unable to recover it.
00:20:59.994 [2024-04-18 13:50:02.525231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.525464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.525491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.525649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.525836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.525864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.526082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.526245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.526273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.526456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.526707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.526734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.526940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.527281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.527608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.527826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.528014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.528167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.528201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.528421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.528643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.528680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.528924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.529305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.529721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.529923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.530183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.530337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.530364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.530761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.530788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.531001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.531306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.531622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.531818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.531990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.532141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.532183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.532379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.532574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.532601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.532816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.533259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.533654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.533841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.533982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.534279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.534579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.534725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.534876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.535435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 00:20:59.994 [2024-04-18 13:50:02.535822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.994 [2024-04-18 13:50:02.535998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.994 qpair failed and we were unable to recover it. 
00:20:59.994 [2024-04-18 13:50:02.536217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.536422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.536449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.536678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.536852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.536879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.537043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.537384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.537713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.537968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.538161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.538330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.538369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.538574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.538693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.538720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.538873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.539360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.539717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.539963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.540111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.540247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.540271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.540458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.540631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.540658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.540818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.541103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.541130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.541296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.541470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.541506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.541740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.541978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.542005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.542324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.542481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.542508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.542691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.542870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.542897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.543172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.543384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.543411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.543549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.543746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.543776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.543966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.544209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.544248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.544516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.544630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.544656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.544807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.545269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.545702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.545909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.546084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.546362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.546390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.546545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.546746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.546773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.546974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.995 [2024-04-18 13:50:02.547371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 
00:20:59.995 [2024-04-18 13:50:02.547698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.995 [2024-04-18 13:50:02.547854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.995 qpair failed and we were unable to recover it. 00:20:59.996 [2024-04-18 13:50:02.547980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.548163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.548196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.996 qpair failed and we were unable to recover it. 00:20:59.996 [2024-04-18 13:50:02.548332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.548553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.548580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.996 qpair failed and we were unable to recover it. 00:20:59.996 [2024-04-18 13:50:02.548840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.548987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.996 [2024-04-18 13:50:02.549018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.996 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.582852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.583262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.583657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.583888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.584098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.584283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.584312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.584529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.584769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.584796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.584960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.585064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.585087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.585356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.585553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.585593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.585802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.586180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.586634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.586833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.586986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.587185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.587213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.587382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.587528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.587567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.587766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.587978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.588005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.588208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.588416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.588443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.588659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.588764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.588800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.588947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.589274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.589596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.589898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.590089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.590267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.590305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.590491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.590677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.590704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 
00:20:59.999 [2024-04-18 13:50:02.590969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.591337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.591771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.591953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:20:59.999 qpair failed and we were unable to recover it. 00:20:59.999 [2024-04-18 13:50:02.592140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.592314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:59.999 [2024-04-18 13:50:02.592342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.592551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.592759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.592786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.592946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.593328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.593724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.593906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.594067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.594192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.594221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.594448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.594617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.594644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.594799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.595275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.595644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.595872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.596028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.596165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.596207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.596419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.596602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.596629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.596900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.597267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.597704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.597975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.598296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.598467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.598494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.598675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.598830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.598857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.599036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.599233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.599261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.599518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.599665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.599691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.599868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.600002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.600029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.600316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.600577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.600604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.600787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.601201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.601738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.601968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.602215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.602353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.602380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.602569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.602799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.602825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.603026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.603235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.603263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.603551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.603723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.603750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 00:21:00.000 [2024-04-18 13:50:02.603961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.604161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.604207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.000 qpair failed and we were unable to recover it. 
00:21:00.000 [2024-04-18 13:50:02.604476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.000 [2024-04-18 13:50:02.604659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.604686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.001 qpair failed and we were unable to recover it. 00:21:00.001 [2024-04-18 13:50:02.604913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.001 qpair failed and we were unable to recover it. 00:21:00.001 [2024-04-18 13:50:02.605401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.001 qpair failed and we were unable to recover it. 00:21:00.001 [2024-04-18 13:50:02.605769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.001 [2024-04-18 13:50:02.605992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.001 qpair failed and we were unable to recover it. 
00:21:00.001 [2024-04-18 13:50:02.606180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.001 [2024-04-18 13:50:02.606396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.001 [2024-04-18 13:50:02.606423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.001 qpair failed and we were unable to recover it.
[The same sequence of log entries repeats continuously from 13:50:02.606 through 13:50:02.641: two `posix_sock_create: *ERROR*: connect() failed, errno = 111` lines, the `nvme_tcp_qpair_connect_sock` connection error for tqpair=0x185fb30 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it."]
00:21:00.004 [2024-04-18 13:50:02.641489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.641649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.641675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.641811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.642289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.642651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.642884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 
00:21:00.004 [2024-04-18 13:50:02.643030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.643182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.643209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.643451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.643654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.643682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.643879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.644351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 
00:21:00.004 [2024-04-18 13:50:02.644834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.644998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.645120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.645274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.645301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.645544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.645683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.645737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.645940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 
00:21:00.004 [2024-04-18 13:50:02.646266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.646690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.646945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.647125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.647317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.647350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.647494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.647673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.647699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 
00:21:00.004 [2024-04-18 13:50:02.647918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.648280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.648602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.648845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 00:21:00.004 [2024-04-18 13:50:02.649006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.649190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.649219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.004 qpair failed and we were unable to recover it. 
00:21:00.004 [2024-04-18 13:50:02.649361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.004 [2024-04-18 13:50:02.649563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.649589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.649719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.649899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.649925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.650080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.650374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.650738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.650912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.651073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.651252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.651280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.651497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.651687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.651714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.651872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.652271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.652653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.652849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.652996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.653401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.653716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.653875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.654067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.654261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.654289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.654528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.654687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.654723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.654907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.655289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.655688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.655822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.655986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.656364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.656728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.656876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.657061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.657212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.657253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.657452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.657611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.657637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.657925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.658292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.658651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.658848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.659008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.659382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 
00:21:00.005 [2024-04-18 13:50:02.659757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.659969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.005 qpair failed and we were unable to recover it. 00:21:00.005 [2024-04-18 13:50:02.660143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.005 [2024-04-18 13:50:02.660338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.660367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 00:21:00.006 [2024-04-18 13:50:02.660553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.660804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.660835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 00:21:00.006 [2024-04-18 13:50:02.660990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 
00:21:00.006 [2024-04-18 13:50:02.661291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 00:21:00.006 [2024-04-18 13:50:02.661716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.661904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 00:21:00.006 [2024-04-18 13:50:02.662048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.662206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.662234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 00:21:00.006 [2024-04-18 13:50:02.662382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.662533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.006 [2024-04-18 13:50:02.662559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.006 qpair failed and we were unable to recover it. 
00:21:00.006 [2024-04-18 13:50:02.662694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.006 [2024-04-18 13:50:02.662845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.006 [2024-04-18 13:50:02.662885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.006 qpair failed and we were unable to recover it.
[The same four-line error sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x185fb30 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt from timestamp 13:50:02.663063 through 13:50:02.697360.]
00:21:00.009 [2024-04-18 13:50:02.697536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.697756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.697783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.697925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.698286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.698647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.698826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 
00:21:00.009 [2024-04-18 13:50:02.699014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.699227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.699264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.699427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.699602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.699626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.699805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.699983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.700007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.700264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.700452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.700479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 
00:21:00.009 [2024-04-18 13:50:02.700724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.700874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.700900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.701092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.701282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.701310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.701474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.701618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.701645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.701824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 
00:21:00.009 [2024-04-18 13:50:02.702270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.702642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.702875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.703041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.703214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.703244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 00:21:00.009 [2024-04-18 13:50:02.703474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.703649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.009 [2024-04-18 13:50:02.703675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.009 qpair failed and we were unable to recover it. 
00:21:00.009 [2024-04-18 13:50:02.703945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.704399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.704698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.704893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.705035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.705219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.705246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.705444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.705600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.705624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.705814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.706213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.706597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.706777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.706924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.707385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.707732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.707898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.708059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.708398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.708755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.708940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.709116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.709361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.709400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.709528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.709700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.709727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.709894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.710293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.710642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.710965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.711188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.711373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.711398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.711610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.711753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.711780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.711967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.712302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.712701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.712854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 
00:21:00.010 [2024-04-18 13:50:02.713011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.713235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.010 [2024-04-18 13:50:02.713262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.010 qpair failed and we were unable to recover it. 00:21:00.010 [2024-04-18 13:50:02.713387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.713561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.713585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.713732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.713878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.713902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.714064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.714459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.714821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.714988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.715158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.715391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.715418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.715596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.715771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.715796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.715932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.716292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.716673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.716897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.717080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.717200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.717237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.717479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.717638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.717665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.717834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.718192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.718555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.718772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.718961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.719305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.719713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.719963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.720194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.720346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.720370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.720520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.720654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.720693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.720858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.721288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.721538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.721679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.721840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.722262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.722710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.722912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.723074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.723262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.723290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 
00:21:00.011 [2024-04-18 13:50:02.723543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.723695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.723718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.723981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.724163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.724196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.011 qpair failed and we were unable to recover it. 00:21:00.011 [2024-04-18 13:50:02.724330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.011 [2024-04-18 13:50:02.724507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.724534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.724756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.724927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.724953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.725164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.725370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.725404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.725589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.725732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.725759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.725917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.726202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.726630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.726841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.727017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.727183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.727207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.727415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.727594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.727621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.727881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.728227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.728602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.728769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.728912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.729320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.729726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.729979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.730148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.730335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.730361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.730572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.730738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.730775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.730961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.731157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.731190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.731342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.731580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.731617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.731803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.732263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.732722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.732930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.733113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.733308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.733336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.733550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.733702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.733728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.733878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.734172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.734580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.734783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.735036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.735199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.735227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.735481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.735649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.735685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.735832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.735984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.736011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 
00:21:00.012 [2024-04-18 13:50:02.736189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.736361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.736396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.012 qpair failed and we were unable to recover it. 00:21:00.012 [2024-04-18 13:50:02.736586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.012 [2024-04-18 13:50:02.736766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.736801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.736922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.737319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.737750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.737945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.738159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.738312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.738339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.738550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.738727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.738754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.738884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.739329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.739717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.739903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.740094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.740281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.740324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.740469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.740651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.740677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.740800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.741206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.741580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.741857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.742020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.742182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.742209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.742392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.742541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.742566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.742756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.742976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.743002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.743157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.743311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.743337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.743535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.743697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.743728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.743882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.744242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.744659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.744852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.744999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.745286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.745626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.745780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.745957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.746393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.013 [2024-04-18 13:50:02.746792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.746976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.747153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.747365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.747392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.747554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.747722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.747772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 00:21:00.013 [2024-04-18 13:50:02.747980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.748194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.013 [2024-04-18 13:50:02.748236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.013 qpair failed and we were unable to recover it. 
00:21:00.289 [2024-04-18 13:50:02.780407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.780582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.780609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.780757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.780915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.780942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.781122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.781319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.781347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.781525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.781644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.781671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 
00:21:00.289 [2024-04-18 13:50:02.781855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.782236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.782662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.782946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.783088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.783390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.783418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 
00:21:00.289 [2024-04-18 13:50:02.783575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.783706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.783728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.783949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.784290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.289 qpair failed and we were unable to recover it. 00:21:00.289 [2024-04-18 13:50:02.784653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.289 [2024-04-18 13:50:02.784815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.784964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.785118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.785149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.785369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.785507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.785566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.785741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.785947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.786006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.786209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.786403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.786430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.786573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.786741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.786768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.786970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.787337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.787665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.787848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.787997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.788144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.788166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.788358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.788550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.788602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.788796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.789245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.789729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.789957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.790171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.790333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.790360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.790534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.790650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.790672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.790846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.791089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.791116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.791285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.791414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.791441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.791746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.791990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.792028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.792240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.792410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.792438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.792638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.792761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.792787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.792915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.793276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.793682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.793861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.794129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.794308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.794337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.794511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.794713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.794739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.795020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.795276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.795303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.795585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.795709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.795735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 00:21:00.290 [2024-04-18 13:50:02.795898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.796048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.796075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.290 qpair failed and we were unable to recover it. 
00:21:00.290 [2024-04-18 13:50:02.796225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.290 [2024-04-18 13:50:02.796387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.796414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.796538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.796684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.796706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.796881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.797292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 
00:21:00.291 [2024-04-18 13:50:02.797663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.797840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.798060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.798238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.798266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.798474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.798601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.798627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.798829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 
00:21:00.291 [2024-04-18 13:50:02.799238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.799642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.799836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.800016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.800188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.800216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.800442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.800694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.800743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 
00:21:00.291 [2024-04-18 13:50:02.800910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.801247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.801274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.801538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.801720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.801769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.801979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.802126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.802153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 00:21:00.291 [2024-04-18 13:50:02.802404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.802597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.291 [2024-04-18 13:50:02.802624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.291 qpair failed and we were unable to recover it. 
00:21:00.291 [2024-04-18 13:50:02.802821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.802950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.802989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.803161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.803359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.803396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.803595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.803762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.803789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.804033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.804198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.804226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.804373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.804639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.804666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.804844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.805291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.805791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.805975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.806143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.806292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.806327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.806612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.806798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.806825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.807034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.807208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.807236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.807390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.807636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.807673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.291 qpair failed and we were unable to recover it.
00:21:00.291 [2024-04-18 13:50:02.807902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.291 [2024-04-18 13:50:02.808156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.808199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.808373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.808627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.808654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.808881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.809141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.809168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.809465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.809690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.809717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.809905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.810230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.810267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.810527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.810702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.810729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.810975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.811136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.811163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.811394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.811564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.811602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.811866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.812290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.812663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.812843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.813028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.813203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.813231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.813446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.813649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.813676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.813860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.814293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.814616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.814780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.814923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.815230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.815670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.815878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.816025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.816187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.816215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.816469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.816624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.816651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.816855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.817346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.817781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.817989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.818156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.818311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.818356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.818512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.818663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.818691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.818822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.818976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.292 [2024-04-18 13:50:02.819007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.292 qpair failed and we were unable to recover it.
00:21:00.292 [2024-04-18 13:50:02.819201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.819481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.819509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.819712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.819853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.819879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.820082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.820258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.820286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.820534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.820676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.820715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.820835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.821296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.821648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.821874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.822057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.822275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.822303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.822559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.822790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.822817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.822998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.823258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.823287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.823460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.823662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.823689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.824079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.824275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.824314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.824538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.824700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.824727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.824946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.825309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.825651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.825918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.826073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.826495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.826798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.826975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.827141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.827279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.827311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.827623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.827771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.827797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.827922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.828068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.828090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.828304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.828626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.828653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.828844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.829225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.829647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.829856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.830020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.830205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.830232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.830381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.830503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.830525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.293 qpair failed and we were unable to recover it.
00:21:00.293 [2024-04-18 13:50:02.830753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.293 [2024-04-18 13:50:02.830907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.830934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.831125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.831254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.831281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.831488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.831731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.831769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.831880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.832254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.832665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.832860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.833029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.833211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.833239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.833476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.833636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.833662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.833844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.833970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.834005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.834173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.834366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.834392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.834568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.834771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.834807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.834961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.835191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.835218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.835434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.835667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.835694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.835905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.836238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.836613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.836830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.836984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.837239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.837267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.837549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.837779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.837805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.837978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.838331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.838659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.294 [2024-04-18 13:50:02.838787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.294 qpair failed and we were unable to recover it.
00:21:00.294 [2024-04-18 13:50:02.838941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.294 [2024-04-18 13:50:02.839088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.294 [2024-04-18 13:50:02.839115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.294 qpair failed and we were unable to recover it. 00:21:00.294 [2024-04-18 13:50:02.839272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.294 [2024-04-18 13:50:02.839422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.294 [2024-04-18 13:50:02.839449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.294 qpair failed and we were unable to recover it. 00:21:00.294 [2024-04-18 13:50:02.839608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.839770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.839810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.839957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.840276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.840656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.840833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.841063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.841432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.841755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.841960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.842110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.842459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.842809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.842962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.843112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.843415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.843787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.843971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.844145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.844297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.844320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.844475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.844627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.844654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.844808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.845211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.845551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.845726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.845978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.846346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.846639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.846856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.847038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.847369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.847638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.847792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.847939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.848272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 
00:21:00.295 [2024-04-18 13:50:02.848534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.848843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.848979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.295 [2024-04-18 13:50:02.849169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.849322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.295 [2024-04-18 13:50:02.849350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.295 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.849488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.849667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.849699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.849867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.850223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.850527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.850671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.850844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.851257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.851631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.851823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.852039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.852347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.852651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.852826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.852963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.853312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.853616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.853800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.853931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.854389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.854765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.854958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.855103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.855409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.855772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.855948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.856167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.856334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.856358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.856509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.856694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.856717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.856894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.857201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.857470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.857800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.857983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 
00:21:00.296 [2024-04-18 13:50:02.858130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.858402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.858695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.858869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.296 qpair failed and we were unable to recover it. 00:21:00.296 [2024-04-18 13:50:02.859037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.296 [2024-04-18 13:50:02.859197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.859237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.859354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.859478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.859501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.859699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.859837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.859860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.860039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.860360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.860694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.860861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.861094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.861254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.861280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.861446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.861593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.861621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.861765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.861981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.862166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.862488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.862795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.862932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.863062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.863424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.863745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.863905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.864053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.864343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.864696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.864886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.865088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.865384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.865749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.865936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.866106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.866244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.866268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.866440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.866615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.866637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.866828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.867256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.867608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.867885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.868016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.868359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 00:21:00.297 [2024-04-18 13:50:02.868711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.297 [2024-04-18 13:50:02.868883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.297 qpair failed and we were unable to recover it. 
00:21:00.297 [2024-04-18 13:50:02.869086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.869221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.869264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.869447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.869619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.869649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.869850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.870220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.870558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.870778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.871033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.871353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.871742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.871931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.872138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.872482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.872833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.872980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.873104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.873504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.873792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.873963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.874086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.874410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.874747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.874907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.875135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.875295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.875320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.875435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.875622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.875644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.875796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.875987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.876172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.876492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.876808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.876991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 00:21:00.298 [2024-04-18 13:50:02.877183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.877298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.877322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.298 qpair failed and we were unable to recover it. 
00:21:00.298 [2024-04-18 13:50:02.877484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.877674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.298 [2024-04-18 13:50:02.877695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.877850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.877985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.878173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.878446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 
00:21:00.299 [2024-04-18 13:50:02.878766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.878919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.879070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.879354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.879660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.879819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 
00:21:00.299 [2024-04-18 13:50:02.879966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.880266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.880621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.880778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.880907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 
00:21:00.299 [2024-04-18 13:50:02.881211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.881570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.881750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.881902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.882226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 
00:21:00.299 [2024-04-18 13:50:02.882497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.882771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.882930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.883115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.883269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.883294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 00:21:00.299 [2024-04-18 13:50:02.883413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.883563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.299 [2024-04-18 13:50:02.883584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.299 qpair failed and we were unable to recover it. 
00:21:00.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2683206 Killed "${NVMF_APP[@]}" "$@"
00:21:00.301 [2024-04-18 13:50:02.898993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.899116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.899138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.301 qpair failed and we were unable to recover it.
00:21:00.301 13:50:02 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:21:00.301 [2024-04-18 13:50:02.899279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.899393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 13:50:02 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:00.301 [2024-04-18 13:50:02.899416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.301 qpair failed and we were unable to recover it.
00:21:00.301 13:50:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:00.301 [2024-04-18 13:50:02.899592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 13:50:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:00.301 [2024-04-18 13:50:02.899757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.899782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.301 13:50:02 -- common/autotest_common.sh@10 -- # set +x
00:21:00.301 qpair failed and we were unable to recover it.
00:21:00.301 [2024-04-18 13:50:02.902553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.902712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 [2024-04-18 13:50:02.902735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.301 qpair failed and we were unable to recover it.
00:21:00.301 13:50:02 -- nvmf/common.sh@470 -- # nvmfpid=2683820
[2024-04-18 13:50:02.902842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.301 13:50:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:00.301 13:50:02 -- nvmf/common.sh@471 -- # waitforlisten 2683820
[2024-04-18 13:50:02.902957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-18 13:50:02.902987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:21:00.301 13:50:02 -- common/autotest_common.sh@817 -- # '[' -z 2683820 ']'
00:21:00.302 13:50:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:00.302 13:50:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:00.302 13:50:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:00.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:00.302 13:50:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:00.302 13:50:02 -- common/autotest_common.sh@10 -- # set +x
00:21:00.302 [2024-04-18 13:50:02.905371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.905555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.905588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.302 qpair failed and we were unable to recover it.
00:21:00.302 [2024-04-18 13:50:02.905729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.905873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.905899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.302 qpair failed and we were unable to recover it.
00:21:00.302 [2024-04-18 13:50:02.906021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.906193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.906236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.302 qpair failed and we were unable to recover it.
00:21:00.302 [2024-04-18 13:50:02.906401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.906553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.302 [2024-04-18 13:50:02.906583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.302 qpair failed and we were unable to recover it.
00:21:00.302 [2024-04-18 13:50:02.914431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.914579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.914606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 00:21:00.302 [2024-04-18 13:50:02.914741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.914921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.914951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 00:21:00.302 [2024-04-18 13:50:02.915090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 00:21:00.302 [2024-04-18 13:50:02.915416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 
00:21:00.302 [2024-04-18 13:50:02.915777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.915957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 00:21:00.302 [2024-04-18 13:50:02.916272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.916428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.916470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.302 qpair failed and we were unable to recover it. 00:21:00.302 [2024-04-18 13:50:02.918192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.918349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.302 [2024-04-18 13:50:02.918379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.918574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.918723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.918748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.918876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.919227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.919563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.919829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.919973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.920018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.920287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.920415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.920444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.920615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.920758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.920802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.920934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.921068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.921097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.921255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.923384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.923741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.923925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.924061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.924443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.924753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.924943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.925083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.925406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.925716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.925900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.926047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.926199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.926228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.926373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.926525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.926552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.926691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.928370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 
00:21:00.303 [2024-04-18 13:50:02.928720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.928907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.929045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.929254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.929281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.303 [2024-04-18 13:50:02.929425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.929560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.303 [2024-04-18 13:50:02.929589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.303 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.929745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.929870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.929896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.930030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.930295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.930625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.930804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.930971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.931123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.931151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.933252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.933388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.933417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.933569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.933700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.933725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.933855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.933999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.934191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.934501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.934780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.934937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.935091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.935427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.935709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.935907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.936073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.936344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.936678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.936859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.937006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.938446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.938804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.938981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.939097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.939402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.939727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.939897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.940018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.940163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.940204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 00:21:00.304 [2024-04-18 13:50:02.940329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.940467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.304 [2024-04-18 13:50:02.940491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.304 qpair failed and we were unable to recover it. 
00:21:00.304 [2024-04-18 13:50:02.940637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.304 qpair failed and we were unable to recover it.
00:21:00.304 [2024-04-18 13:50:02.943374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.304 qpair failed and we were unable to recover it.
00:21:00.304 [2024-04-18 13:50:02.943693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.943893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.304 qpair failed and we were unable to recover it.
00:21:00.304 [2024-04-18 13:50:02.944027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.304 [2024-04-18 13:50:02.944153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.944189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.944330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.944465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.944495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.944666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.944811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.944855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.945007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.945341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.945609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.945772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.945936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.946330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.946648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.946870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.947028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.947332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.947658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.947846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.948007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.948151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.948183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.949192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.949328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.949357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.949508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.949643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.949677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.949861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.949983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.950147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.950452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.950776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.950941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.953191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.953329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.953358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.953537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.953711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.953737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.953875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.954246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.954510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.954838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.954981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.955006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.955149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.955290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.955319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.955483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.955604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.955628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.955886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.956036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.956064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.305 qpair failed and we were unable to recover it.
00:21:00.305 [2024-04-18 13:50:02.956200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.305 [2024-04-18 13:50:02.956347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.956375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.956599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.956735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.956761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.956927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.957338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.957633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.957774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.959192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.959338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.959368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.959522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.959769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.959798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.959977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.960165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.960198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.960435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.960679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.960709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.960850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.961272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.961640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.961896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.962091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.964193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.964226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.964370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.964525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.964554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.964749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.964995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.965030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.965189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.965327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.965352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.965487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.965679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.965704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.965868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.966272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.966573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.966796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.967000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.967431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.967717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.967945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.968197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.968319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.968344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.968485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.968641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.968665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.968848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.969197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.969466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.969671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.306 [2024-04-18 13:50:02.969864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.970104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.306 [2024-04-18 13:50:02.970128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.306 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.970259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.970382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.970406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.970558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.970749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.970772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.970925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.971281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.971555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.971791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.971948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.972351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.972720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.972899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.973076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.973434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.973763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.973947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.974205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.974336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.974361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.974493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.974705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.974730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.974880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.975239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.975532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.975747] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization...
00:21:00.307 [2024-04-18 13:50:02.975811] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:00.307 [2024-04-18 13:50:02.975868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.975980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.976117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.976390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.976731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.976875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.977007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.977311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.977641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.977843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.978012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.978351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.978707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.978881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.979027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.979170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.307 [2024-04-18 13:50:02.979203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.307 qpair failed and we were unable to recover it.
00:21:00.307 [2024-04-18 13:50:02.979343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.979523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.979551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.979682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.979855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.979880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.980039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.980304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 
00:21:00.308 [2024-04-18 13:50:02.980667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.980831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.980996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.981288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.981568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.981761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 
00:21:00.308 [2024-04-18 13:50:02.985189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.985517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.985817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.985974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.986135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 
00:21:00.308 [2024-04-18 13:50:02.986475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.986781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.986914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.987076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.987409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 
00:21:00.308 [2024-04-18 13:50:02.987753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.987914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.988101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.988407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.988764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.988937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 
00:21:00.308 [2024-04-18 13:50:02.989096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.989397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.989748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.989891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.308 qpair failed and we were unable to recover it. 00:21:00.308 [2024-04-18 13:50:02.990119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.308 [2024-04-18 13:50:02.990281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.990311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.990462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.990615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.990643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.990805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.990919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.990942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.991142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.991335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.991366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.991487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.991634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.991669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.991844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.992242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.992586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.992744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.992921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.993295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.993635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.993804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.993922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.994342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.994682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.994821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.994976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.995271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.995647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.995776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.995941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.996249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.996577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.996756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.996934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.997252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.997578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.997768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.997940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.998264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 
00:21:00.309 [2024-04-18 13:50:02.998560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.998808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.998968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.309 [2024-04-18 13:50:02.999151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.999326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.309 [2024-04-18 13:50:02.999351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.309 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:02.999492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:02.999628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:02.999666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:02.999830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:02.999955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:02.999979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.000188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.000492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.000771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.000934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.001084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.001490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.001752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.001939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.002094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.002395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.002705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.002843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.003028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.003369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.003700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.003903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.004054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.004396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.004727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.004963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.005120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.005472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.005791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.005980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.006189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.006382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.006406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.006530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.006721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.006742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.006922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.007266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.007581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.007719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 
00:21:00.310 [2024-04-18 13:50:03.007846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.008157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.310 [2024-04-18 13:50:03.008502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.310 [2024-04-18 13:50:03.008683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.310 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.008847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.008977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.009125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.009382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.009779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.009954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.010101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.010431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.010749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.010925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.011118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.011408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.011770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.011938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.012080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.012375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.012676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.012840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.012964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.013313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.013579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.013738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.013877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.014223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.014601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.014761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.014887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.015207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.015561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.015871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.015991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.016202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.016517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 
00:21:00.311 [2024-04-18 13:50:03.016826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.016988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.017011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.017166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.017325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.017348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.017531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.017678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.311 [2024-04-18 13:50:03.017716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.311 qpair failed and we were unable to recover it. 00:21:00.311 [2024-04-18 13:50:03.017865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.018169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.018512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.018830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.018976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.019173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.019320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.019343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.019526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.019708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.019732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.019861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.019998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.020153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.312 [2024-04-18 13:50:03.020506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.020851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.020996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.021020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.021201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.021378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.021402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.021570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.021688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.021711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.021837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.022200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.022544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.022852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.022989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.023144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.023313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.023339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.023451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.023577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.023601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.023797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.023975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.024174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.024466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.312 [2024-04-18 13:50:03.024827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.024994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.025156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.025490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 00:21:00.312 [2024-04-18 13:50:03.025824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.312 [2024-04-18 13:50:03.025990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.312 qpair failed and we were unable to recover it. 
00:21:00.316 [2024-04-18 13:50:03.053834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.053901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.316 [2024-04-18 13:50:03.054001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.054218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.054461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.054765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.054951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 
00:21:00.316 [2024-04-18 13:50:03.055140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.055515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.055863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.055998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.056200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.056361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.056387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 
00:21:00.316 [2024-04-18 13:50:03.056518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.056681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.056717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.056875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.057208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.057575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.057753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 
00:21:00.316 [2024-04-18 13:50:03.057885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.058241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.058553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.058709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.058868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 
00:21:00.316 [2024-04-18 13:50:03.059239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.059543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.059726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.316 qpair failed and we were unable to recover it. 00:21:00.316 [2024-04-18 13:50:03.059884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.316 [2024-04-18 13:50:03.060050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.060073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.060246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.060385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.060410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.060588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.060764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.060787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.060922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.061305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.061641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.061847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.061996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.062344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.062675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.062862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.063030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.063351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.063696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.063860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.064006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.064377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.064716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.064875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.065044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.065366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.065638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.065772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.065935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.066302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.066651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.066851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.066972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 
00:21:00.317 [2024-04-18 13:50:03.067271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.067559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.067861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.067991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.068014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.317 qpair failed and we were unable to recover it. 00:21:00.317 [2024-04-18 13:50:03.068180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.068341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.317 [2024-04-18 13:50:03.068381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 
00:21:00.318 [2024-04-18 13:50:03.068542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.068656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.068680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.068845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.068954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.068977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.069172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.069311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.069349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.069506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.069659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.069684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 
00:21:00.318 [2024-04-18 13:50:03.069873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.069990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.070013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.070206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.070345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.070371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.070552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.070721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.070745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.070926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 
00:21:00.318 [2024-04-18 13:50:03.071240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.071611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.071766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.071915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.072075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.072099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 00:21:00.318 [2024-04-18 13:50:03.072270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.072413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.318 [2024-04-18 13:50:03.072438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.318 qpair failed and we were unable to recover it. 
00:21:00.318 [2024-04-18 13:50:03.072606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.318 [2024-04-18 13:50:03.072766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.318 [2024-04-18 13:50:03.072804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.318 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed triplet repeats with identical content (only timestamps differ, from 13:50:03.072952 through 13:50:03.101155; tqpair=0x7f6448000b90, addr=10.0.0.2, port=4420, errno = 111 throughout) ...]
00:21:00.593 [2024-04-18 13:50:03.101358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.593 [2024-04-18 13:50:03.101517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.593 [2024-04-18 13:50:03.101555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.593 qpair failed and we were unable to recover it.
00:21:00.593 [2024-04-18 13:50:03.101684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.593 [2024-04-18 13:50:03.101810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.593 [2024-04-18 13:50:03.101834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.101974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.102330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.102662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.102845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.102966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.103311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.103637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.103844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.104005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.104318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.104658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.104826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.104985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.105373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.105736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.105878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.105992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.106325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.106656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.106837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.106992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.107291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.107609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.107742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.107907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.108248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.108562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.108742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.108901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.109237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 
00:21:00.594 [2024-04-18 13:50:03.109586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.109722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.594 [2024-04-18 13:50:03.109885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.110024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.594 [2024-04-18 13:50:03.110048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.594 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.110232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.110378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.110403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.110536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.110678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.110702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.110886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.111227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.111554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.111755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.111911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.112229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.112550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.112727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.112876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.113229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.113588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.113760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.113908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.114252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.114566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.114731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.114897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.115209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.115541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.115831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.115968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.116123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.116466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.116801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.116962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.117121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.117463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.117829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.117990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.118136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.118455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 
00:21:00.595 [2024-04-18 13:50:03.118782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.595 [2024-04-18 13:50:03.118972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.595 qpair failed and we were unable to recover it. 00:21:00.595 [2024-04-18 13:50:03.119130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.119315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.119341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.119500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.119675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.119698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.119877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 
00:21:00.596 [2024-04-18 13:50:03.120242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.120565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.120730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.120905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.121231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 
00:21:00.596 [2024-04-18 13:50:03.121559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.121754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.121889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.122154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.122485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 
00:21:00.596 [2024-04-18 13:50:03.122784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.122951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.123093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.123388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.123735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.123926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 
00:21:00.596 [2024-04-18 13:50:03.124045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.124340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.124670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.124837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.125030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 
00:21:00.596 [2024-04-18 13:50:03.125386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.596 qpair failed and we were unable to recover it. 00:21:00.596 [2024-04-18 13:50:03.125740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.596 [2024-04-18 13:50:03.125904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.126094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.126349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.126705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.126867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.127031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.127349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.127697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.127903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.128041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.128353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.128685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.128885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.129044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.129340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.129681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.129868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.129994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.130380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.130649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.130816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.130977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.131292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.131665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.131798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.131983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.132342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.132668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.132844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.133027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.133395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.133724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.133889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.134053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.134208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.134247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 00:21:00.597 [2024-04-18 13:50:03.134390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.134541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.597 [2024-04-18 13:50:03.134564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.597 qpair failed and we were unable to recover it. 
00:21:00.597 [2024-04-18 13:50:03.134723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.134832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.134856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.135004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.135287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.135562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.135721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.135871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.136192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.136489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.136814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.136951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.137141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.137498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.137825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.137982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.138141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.138473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.138791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.138950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.139076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.139422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.139757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.139932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.140092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.140413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.140738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.140900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.141062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.141347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.141682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.141884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.142037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.142196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.142222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 
00:21:00.598 [2024-04-18 13:50:03.142350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.142519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.142544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.598 qpair failed and we were unable to recover it. 00:21:00.598 [2024-04-18 13:50:03.142705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.598 [2024-04-18 13:50:03.142848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.142872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 00:21:00.599 [2024-04-18 13:50:03.143028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 00:21:00.599 [2024-04-18 13:50:03.143338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 
00:21:00.599 [2024-04-18 13:50:03.143670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.143811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 00:21:00.599 [2024-04-18 13:50:03.143973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 00:21:00.599 [2024-04-18 13:50:03.144305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 00:21:00.599 [2024-04-18 13:50:03.144600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.599 [2024-04-18 13:50:03.144764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.599 qpair failed and we were unable to recover it. 
00:21:00.602 [2024-04-18 13:50:03.173117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.173281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.173307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.602 qpair failed and we were unable to recover it.
00:21:00.602 [2024-04-18 13:50:03.173451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.173637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.173664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.602 qpair failed and we were unable to recover it.
00:21:00.602 [2024-04-18 13:50:03.173723] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:00.602 [2024-04-18 13:50:03.173757] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:00.602 [2024-04-18 13:50:03.173772] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:00.602 [2024-04-18 13:50:03.173784] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:00.602 [2024-04-18 13:50:03.173795] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:00.602 [2024-04-18 13:50:03.173835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.173874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:21:00.602 [2024-04-18 13:50:03.173934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:21:00.602 [2024-04-18 13:50:03.173988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:21:00.602 [2024-04-18 13:50:03.173991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:00.602 [2024-04-18 13:50:03.173998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.174023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.602 qpair failed and we were unable to recover it.
00:21:00.602 [2024-04-18 13:50:03.174226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.174372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.174398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.602 qpair failed and we were unable to recover it.
00:21:00.602 [2024-04-18 13:50:03.174540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.174685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.602 [2024-04-18 13:50:03.174712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.602 qpair failed and we were unable to recover it.
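The `errno = 111` that dominates these records is ECONNREFUSED on Linux: the initiator's `connect()` to 10.0.0.2:4420 reaches a host where nothing is accepting on that port (here, the NVMe/TCP target is not yet listening, so SPDK's qpair connect keeps failing). The following standalone Python sketch reproduces that failure mode outside SPDK; `try_connect` is a hypothetical helper, not part of any SPDK tooling:

```python
import errno
import socket

def try_connect(host, port, timeout=1.0):
    """Attempt a TCP connect; return 0 on success, else the OS errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        # e.errno is ECONNREFUSED (111 on Linux) when the host is up
        # but no listener has bound the target port.
        return e.errno if e.errno is not None else -1
    finally:
        s.close()
```

Connecting to a loopback port with no listener typically returns `errno.ECONNREFUSED` immediately, which matches the tight timestamp spacing (sub-millisecond) between retries in the log above.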
00:21:00.602 [2024-04-18 13:50:03.174920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.602 qpair failed and we were unable to recover it. 00:21:00.602 [2024-04-18 13:50:03.175277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.602 qpair failed and we were unable to recover it. 00:21:00.602 [2024-04-18 13:50:03.175583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.175819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.602 qpair failed and we were unable to recover it. 00:21:00.602 [2024-04-18 13:50:03.176049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.602 qpair failed and we were unable to recover it. 
00:21:00.602 [2024-04-18 13:50:03.176376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.602 qpair failed and we were unable to recover it. 00:21:00.602 [2024-04-18 13:50:03.176686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.602 [2024-04-18 13:50:03.176877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.177088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.177299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.177325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.177495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.177731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.177757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.177960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.178356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.178724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.178954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.179148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.179336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.179367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.179521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.179716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.179742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.179908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.180106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.180132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.180285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.180485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.180511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.180667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.180992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.181018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.181208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.181457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.181491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.181825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.182291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.182735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.182957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.183156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.183319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.183357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.183548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.183723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.183753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.183954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.184439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.184790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.184978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.185189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.185328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.185355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.185545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.185678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.185704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.185915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 
00:21:00.603 [2024-04-18 13:50:03.186263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.186595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.186762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.186904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.187044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.187071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.603 qpair failed and we were unable to recover it. 00:21:00.603 [2024-04-18 13:50:03.187215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.187329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.603 [2024-04-18 13:50:03.187359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.187714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.187740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.187981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.188336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.188657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.188849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.189043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.189211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.189238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.189449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.189659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.189685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.189862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.190207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.190623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.190866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.191073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.191272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.191303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.191478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.191703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.191730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.191896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.192233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.192648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.192910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.193107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.193355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.193382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.193600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.193746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.193773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.193933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.194140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.194166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.194372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.194607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.194633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.194875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.195287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 
00:21:00.604 [2024-04-18 13:50:03.195685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.195882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.196130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.196342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.196369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.196571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.196735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.604 [2024-04-18 13:50:03.196761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.604 qpair failed and we were unable to recover it. 00:21:00.604 [2024-04-18 13:50:03.196994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.197411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.197755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.197943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.198143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.198341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.198368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.198571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.198723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.198749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.198943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.199260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.199664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.199886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.200011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.200203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.200237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.200446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.200681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.200708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.200875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.201243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.201637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.201862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.202100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.202244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.202271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.202449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.202686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.202713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.202947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.203367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.203770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.203984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.204143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.204392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.204419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.204571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.204702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.204727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.204960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.205157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.205190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.205438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.205636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.205662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.205820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.206226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.206620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.206890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 
00:21:00.605 [2024-04-18 13:50:03.207116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.207250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.207277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.207445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.207613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.207638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.605 [2024-04-18 13:50:03.207894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.208090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.605 [2024-04-18 13:50:03.208116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.605 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.208325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.208488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.208515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.208693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.208889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.208916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.209155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.209430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.209458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.209661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.209863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.209890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.210094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.210328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.210365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.210486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.210623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.210649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.210806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.211277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.211685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.211964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.212173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.212438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.212468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.212636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.212841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.212868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.213073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.213320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.213347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.213526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.213771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.213809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.213963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.214330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.214721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.214956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.215227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.215475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.215502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.215776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.216269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.216685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.216946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.217151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.217366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.217394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 
00:21:00.606 [2024-04-18 13:50:03.217634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.217871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.217897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.606 [2024-04-18 13:50:03.218125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.218320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.606 [2024-04-18 13:50:03.218346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.606 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.218548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.218737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.218763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.219213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.219251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.607 [2024-04-18 13:50:03.219465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.219676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.219702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.219911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.220126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.220153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.220406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.220618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.220644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.220855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.220989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.221015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.607 [2024-04-18 13:50:03.221185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.221404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.221431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.221685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.221862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.221888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.222120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.222346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.222372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.222584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.222781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.222807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.607 [2024-04-18 13:50:03.222983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.223206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.223233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.223417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.223686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.223712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.223882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.224234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.607 [2024-04-18 13:50:03.224622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.224898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.225082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.225323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.225349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.225560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.225768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.225794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.226039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.226252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.226279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.607 [2024-04-18 13:50:03.226481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.226644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.226670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.226883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.227254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 00:21:00.607 [2024-04-18 13:50:03.227715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.607 [2024-04-18 13:50:03.227938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.607 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.266438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.266665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.266691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.266937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.267224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.267251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.267464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.267672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.267698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.267959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.268144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.268201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.268427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.268644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.268670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.268938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.269139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.269165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.269457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.269736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.269762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.269994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.270215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.270243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.270501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.270712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.270739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.271009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.271220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.271247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.271491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.271718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.271744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.271929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.272121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.272146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.272443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.272722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.272749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.272950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.273170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.273216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.273439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.273668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.273708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.273930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.274146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.274193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.274466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.274749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.274774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.275050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.275288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.275316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.275595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.275864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.275889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 00:21:00.611 [2024-04-18 13:50:03.276187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.276471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.611 [2024-04-18 13:50:03.276496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.611 qpair failed and we were unable to recover it. 
00:21:00.611 [2024-04-18 13:50:03.276835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.277066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.277091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.277328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.277556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.277582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.277760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.277985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.278010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.278289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.278517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.278557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.278834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.279365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.279750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.279978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.280234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.280496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.280520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.280766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.281005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.281028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.281281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.281508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.281549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.281781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.282011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.282036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.282234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.282467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.282507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.282783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.283002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.283027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.283332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.283564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.283589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.283833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.284060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.284085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.284327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.284517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.284556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.284807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.285089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.285114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.285331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.285598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.285623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.285825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.286092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.286117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.286407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.286678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.286703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.286973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.287147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.287194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.287423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.287662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.287688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.287934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.288202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.288243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.288483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.288698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.288723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 
00:21:00.612 [2024-04-18 13:50:03.288997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.289200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.289242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.289438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.289610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.289635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.612 qpair failed and we were unable to recover it. 00:21:00.612 [2024-04-18 13:50:03.289848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.290065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.612 [2024-04-18 13:50:03.290090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.290286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.290601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.290627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 
00:21:00.613 [2024-04-18 13:50:03.290922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.291217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.291244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.291520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.291733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.291760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.292055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.292266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.292293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.292427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.292646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.292672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 
00:21:00.613 [2024-04-18 13:50:03.292931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.293181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.293223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.293435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.293675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.293701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.293924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.294123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.294174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.294466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.294727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.294753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 
00:21:00.613 [2024-04-18 13:50:03.295048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.295319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.295347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.295582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.295787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.295813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.296069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.296302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.296330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.296580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.296860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.296887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 
00:21:00.613 [2024-04-18 13:50:03.297092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.297347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.297375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.297621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.297901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.297928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.298193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.298431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.298458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 00:21:00.613 [2024-04-18 13:50:03.298697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.298984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.613 [2024-04-18 13:50:03.299010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.613 qpair failed and we were unable to recover it. 
00:21:00.613 [2024-04-18 13:50:03.299228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.299399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.299425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.299666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.299898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.299924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.300184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.300370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.300397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.300736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.300995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.301021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.301205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.301495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.301535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.301779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.302021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.302048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.302354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.302641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.302667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.302955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.303156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.303210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.303491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.303752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.303778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.613 [2024-04-18 13:50:03.304031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.304274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.613 [2024-04-18 13:50:03.304301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.613 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.304581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.304880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.304906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.305156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.305395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.305422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.305596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.305740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.305789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.306039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.306322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.306350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.306551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.306788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.306813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.307106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.307360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.307388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.307628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.307875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.307901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.308199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.308448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.308490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.308705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.308932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.308958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.309267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.309521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.309547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.309844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.310147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.310194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.310436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.310733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.310759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.311065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.311365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.311393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.311626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.311858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.311884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.312182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.312459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.312500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.312768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.313043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.313069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.313366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.313659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.313685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.313898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.314154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.314201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.314463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.314711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.314737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.315042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.315298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.315326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.315565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.315812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.315854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.316116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.316342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.316368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.316576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.316846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.316871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.317096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.317385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.317412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.317608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.317837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.317863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.318107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.318271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.318298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.614 qpair failed and we were unable to recover it.
00:21:00.614 [2024-04-18 13:50:03.318434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.614 [2024-04-18 13:50:03.318694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.318720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.318970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.319202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.319230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.319516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.319755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.319781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.320327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.320354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.320603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.320792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.320819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.321082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.321368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.321395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.321639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.321780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.321805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.322106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.322410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.322438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.322751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.323008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.323034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.323289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.323545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.323571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.323831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.324115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.324142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.324385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.324558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.324587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.324854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.325138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.325164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.325478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.325771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.325797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.326029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.326216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.326243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.326445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.326689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.326715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.326943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.327240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.327268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.327500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.327790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.327816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.328027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.328227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.328255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.328446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.328639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.328665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.328852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.329306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.329774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.329973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.330222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.330505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.330532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.330826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.331082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.331108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.331403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.331639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.331665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.331962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.332212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.615 [2024-04-18 13:50:03.332239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.615 qpair failed and we were unable to recover it.
00:21:00.615 [2024-04-18 13:50:03.332495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.332666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.332691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.332948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.333169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.333216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.333447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.333624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.333649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.333902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.334196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.334224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.334473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.334706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.334735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.334951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.335217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.335244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.335434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.335667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.335692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.335959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.336188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.336215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.336388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.336606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.616 [2024-04-18 13:50:03.336630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.616 qpair failed and we were unable to recover it.
00:21:00.616 [2024-04-18 13:50:03.336816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.336998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.337023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.337250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.337479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.337521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.337737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.337900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.337924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.338202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.338449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.338489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 
00:21:00.616 [2024-04-18 13:50:03.338732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.338989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.339015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.339252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.339541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.339571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.339815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.340017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.340042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.340271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.340509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.340535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 
00:21:00.616 [2024-04-18 13:50:03.340828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.341118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.341144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.341393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.341683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.341709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.341954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.342089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.342115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.342324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.342516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.342541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 
00:21:00.616 [2024-04-18 13:50:03.342832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.343096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.343122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.616 qpair failed and we were unable to recover it. 00:21:00.616 [2024-04-18 13:50:03.343360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.343500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.616 [2024-04-18 13:50:03.343525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.343734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.343893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.343919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.344125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.344283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.344309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.344498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.344690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.344715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.344892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.345068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.345093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.345285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.345472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.345498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.345739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.346254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.346566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.346752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.347007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.347285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.347312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.347480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.347693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.347718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.347855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.348238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.348599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.348785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.348980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.349318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.349709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.349896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.350103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.350412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.350803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.350994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.351141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.351312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.351339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.351501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.351680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.351705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.351876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.617 [2024-04-18 13:50:03.352248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.352593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.352745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.352926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.353083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.353109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 00:21:00.617 [2024-04-18 13:50:03.353295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.353479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.617 [2024-04-18 13:50:03.353505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.617 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.353705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.353832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.353872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.354067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.354369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.354713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.354942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.355097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.355439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.355791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.355939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.356102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.356473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.356813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.356994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.357149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.357311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.357351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.357523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.357682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.357722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.357913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.358277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.358581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.358736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.358912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.359279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.359651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.359794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.359913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.360312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.360662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.360833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.360981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.361327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 00:21:00.618 [2024-04-18 13:50:03.361642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.618 [2024-04-18 13:50:03.361830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.618 qpair failed and we were unable to recover it. 
00:21:00.618 [2024-04-18 13:50:03.362007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.618 [2024-04-18 13:50:03.362199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.618 [2024-04-18 13:50:03.362238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.618 qpair failed and we were unable to recover it.
... (the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 / qpair failed sequence repeats through [2024-04-18 13:50:03.392433])
00:21:00.904 [2024-04-18 13:50:03.392631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.392802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.392827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.393061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.393294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.393321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.393468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.393646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.393672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.393933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 
00:21:00.904 [2024-04-18 13:50:03.394374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.394733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.394898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.395142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.395312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.395338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.904 [2024-04-18 13:50:03.395574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.395796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.395821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 
00:21:00.904 [2024-04-18 13:50:03.396017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.396156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.904 [2024-04-18 13:50:03.396200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.904 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.396362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.396500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.396526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.396755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.396939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.396964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.397248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.397389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.397415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 
00:21:00.905 [2024-04-18 13:50:03.397675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.397901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.397927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.398203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.398327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.398353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.398521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.398720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.398751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.398981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.399212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.399246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 
00:21:00.905 [2024-04-18 13:50:03.399641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.399861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.399887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.400088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.400277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.400303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.400450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.400644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.400669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.400866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 
00:21:00.905 [2024-04-18 13:50:03.401323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.401723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.401965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.402163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.402363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.402390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.402632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.402787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.402813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 
00:21:00.905 [2024-04-18 13:50:03.403011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.403188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.403229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.403404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.403636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.403662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.403944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.404212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.404241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.404451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.404631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.404668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 
00:21:00.905 [2024-04-18 13:50:03.404847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.405097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.405123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.405379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.405510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.905 [2024-04-18 13:50:03.405542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.905 qpair failed and we were unable to recover it. 00:21:00.905 [2024-04-18 13:50:03.405777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.405954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.405979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.406212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.406390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.406416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.406553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.406720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.406761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.406906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.407261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.407636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.407803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.408047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.408238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.408265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.408409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.408628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.408654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.408928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.409122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.409148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.409363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.409598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.409624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.409868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.410424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.410828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.410994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.411163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.411348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.411383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.411528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.411642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.411679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.411831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.411990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.412030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.412168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.412354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.412380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.412580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.412758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.412784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.412981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.413289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.413645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.413848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.413996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 
00:21:00.906 [2024-04-18 13:50:03.414354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.414707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.414861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.906 [2024-04-18 13:50:03.415040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.415191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.906 [2024-04-18 13:50:03.415218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.906 qpair failed and we were unable to recover it. 00:21:00.907 [2024-04-18 13:50:03.415398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.907 [2024-04-18 13:50:03.415522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.907 [2024-04-18 13:50:03.415548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.907 qpair failed and we were unable to recover it. 
00:21:00.907 [2024-04-18 13:50:03.415684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.415862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.415890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.416005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.416373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.416738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.416916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.417063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.417209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.417236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.417418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.417600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.417626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.417793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.418317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.418643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.418823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.419072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.419273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.419300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.419451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.419720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.419747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.419985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.420233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.420260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.420443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.420671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.420697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.420923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.421101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.421128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.421341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.421529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.421555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.421837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.422294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.422671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.422928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.423173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.423341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.423367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.423526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.423706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.423732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.907 [2024-04-18 13:50:03.423896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.424084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.907 [2024-04-18 13:50:03.424110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.907 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.424342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.424497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.424522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.424709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.424853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.424879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.425139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.425326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.425352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.425501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.425624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.425650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.425801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.425979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.426163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.426479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.426773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.426954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.427150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.427338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.427363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.427562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.427743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.427769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.427955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.428330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.428721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.428896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.429027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.429341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.429643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.429872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.430090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.430265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.430292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.430426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.430566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.430591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.430820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.431291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.431658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.431876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.908 qpair failed and we were unable to recover it.
00:21:00.908 [2024-04-18 13:50:03.432043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.908 [2024-04-18 13:50:03.432207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.432237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.432387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.432578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.432603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.432747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.432931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.432958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.433118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.433277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.433304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.433473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.433668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.433694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.433878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.434247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.434593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.434776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.434900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.435254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.435579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.435782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.435937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.436337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.436662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.436850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.437013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.437370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.437688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.437873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.437991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.438150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.438183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.438374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.438556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.438581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.909 qpair failed and we were unable to recover it.
00:21:00.909 [2024-04-18 13:50:03.438745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.909 [2024-04-18 13:50:03.438903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.438930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.439083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.439424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.439790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.439974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.440155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.440287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.440313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.440496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.440651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.440676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.440855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.441158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.441443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.441742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.441946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.442098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.442417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.442709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.442879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.443011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.443303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.443568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.443743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.443880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.444206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.444522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.444873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.444993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.445208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.445519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.445800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.445969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.446156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.446518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.910 [2024-04-18 13:50:03.446800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.910 [2024-04-18 13:50:03.446995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.910 qpair failed and we were unable to recover it.
00:21:00.911 [2024-04-18 13:50:03.447141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.911 [2024-04-18 13:50:03.447322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.911 [2024-04-18 13:50:03.447349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.911 qpair failed and we were unable to recover it.
00:21:00.911 [2024-04-18 13:50:03.447495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.447671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.447697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.447843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.448196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.448519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.448854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.448974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.449144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.449449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.449774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.449920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.450090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.450407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.450757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.450930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.451049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.451343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.451688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.451866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.452037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.452336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.452684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.452816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.452986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.453287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.453602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.453775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.453920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.454230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.454524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 00:21:00.911 [2024-04-18 13:50:03.454857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.454987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.911 qpair failed and we were unable to recover it. 
00:21:00.911 [2024-04-18 13:50:03.455096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.455239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.911 [2024-04-18 13:50:03.455266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.455415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.455560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.455586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.455724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.455843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.455868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.455981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.456294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.456631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.456795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.456900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.457220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.457547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.457712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.457856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.458174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.458474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.458813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.458979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.459148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.459438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.459780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.459958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.460128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.460297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.460325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.460465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.460648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.460673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.460926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.461330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.461624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.461757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.461887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.462257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.462616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.462758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 
00:21:00.912 [2024-04-18 13:50:03.462937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.463076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.463102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.463269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.463416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.912 [2024-04-18 13:50:03.463441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.912 qpair failed and we were unable to recover it. 00:21:00.912 [2024-04-18 13:50:03.463562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.463701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.463726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 00:21:00.913 [2024-04-18 13:50:03.463885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 
00:21:00.913 [2024-04-18 13:50:03.464230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 00:21:00.913 [2024-04-18 13:50:03.464621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.464813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 00:21:00.913 [2024-04-18 13:50:03.464930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.465085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.465110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 00:21:00.913 [2024-04-18 13:50:03.465257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.465367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.913 [2024-04-18 13:50:03.465394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.913 qpair failed and we were unable to recover it. 
00:21:00.913 [2024-04-18 13:50:03.465625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.465822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.465847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.466017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.466342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.466676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.466817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.466957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.467238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.467554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.467870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.467998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.468171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.468558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.468862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.468977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.469149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.469455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.469763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.469902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.470048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.470362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.470680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.470877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.913 [2024-04-18 13:50:03.471066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.471251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.913 [2024-04-18 13:50:03.471277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.913 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.471393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.471536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.471561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.471749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.471954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.471980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.472238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.472380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.472405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.472644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.472877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.472903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.473137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.473312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.473338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.473515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.473739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.473764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.473949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.474311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.474761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.474968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.475138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.475310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.475337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.475485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.475592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.475617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.475793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.476052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.476077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.476299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.476637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.476662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.476906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.477348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.477693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.477919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.478111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.478291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.478317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.478434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.478646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.478671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.478904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.479364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.479668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.479864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.914 qpair failed and we were unable to recover it.
00:21:00.914 [2024-04-18 13:50:03.480142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.480316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.914 [2024-04-18 13:50:03.480342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.480514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.480703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.480729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.480980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.481258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.481285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.481404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.481631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.481657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.481854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.482332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.482681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.482848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.483025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.483340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.483666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.483835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.483976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.484341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.484707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.484928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.485188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.485404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.485430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.485578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.485748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.485800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.486046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.486221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.486249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.486466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.486711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.486736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.486918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.487338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.487712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.487918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.488086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.488279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.488306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.488548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.488749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.488775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.488941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.489252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.489279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.489447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.489612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.489648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.489901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.490122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.490147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.915 [2024-04-18 13:50:03.490296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.490428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.915 [2024-04-18 13:50:03.490454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.915 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.490663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.490916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.490942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.491155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.491367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.491393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.491582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.491839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.491865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.492047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.492288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.492315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.492452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.492656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.492681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.492842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.493274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.493731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.493971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.494172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.494337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.494362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.494606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.494778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.494803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.495015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.495235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.495261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.495453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.495614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.495640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.495848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.496268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.496599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.496868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.497162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.497344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.497370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.497570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.497776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.497801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.498021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.498250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.498276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.498439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.498690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.498716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.498902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.499111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.499135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.499318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.499478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.499524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.499789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.500051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.500077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.500314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.500439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.916 [2024-04-18 13:50:03.500465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.916 qpair failed and we were unable to recover it.
00:21:00.916 [2024-04-18 13:50:03.500685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.916 [2024-04-18 13:50:03.500903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.916 [2024-04-18 13:50:03.500928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.916 qpair failed and we were unable to recover it. 00:21:00.916 [2024-04-18 13:50:03.501169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.916 [2024-04-18 13:50:03.501378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.916 [2024-04-18 13:50:03.501403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.916 qpair failed and we were unable to recover it. 00:21:00.916 [2024-04-18 13:50:03.501632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.916 [2024-04-18 13:50:03.501849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.501875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.502143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.502307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.502333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.502460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.502623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.502648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.502872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.503119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.503145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.503300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.503477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.503518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.503784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.503995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.504020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.504278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.504497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.504526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.504777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.505287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.505627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.505801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.505946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.506336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.506670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.506935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.507249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.507395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.507421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.507576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.507731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.507772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.507952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.508294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.508659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.508890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.509150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.509321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.509346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.509493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.509659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.509700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.509962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.510236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.510263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.510416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.510623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.510648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.510902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.511338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.511640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.511831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 00:21:00.917 [2024-04-18 13:50:03.512118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.512330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.917 [2024-04-18 13:50:03.512356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.917 qpair failed and we were unable to recover it. 
00:21:00.917 [2024-04-18 13:50:03.512590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.512764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.512794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.512946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.513100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.513141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.513351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.513561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.513586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.513811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.514237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.514598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.514821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.515080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.515259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.515285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.515431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.515609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.515634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.515927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.516348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.516644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.516902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.517133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.517320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.517346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.517542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.517770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.517796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.518022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.518271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.518298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.518446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.518623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.518648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.518876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.519321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.519697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.519931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.520113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.520273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.520300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.520454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.520709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.520735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.521002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.521239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.521266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.521419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.521668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.521694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.521950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.522170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.522219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.522395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.522665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.522692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 
00:21:00.918 [2024-04-18 13:50:03.522968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.523244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.523271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.918 qpair failed and we were unable to recover it. 00:21:00.918 [2024-04-18 13:50:03.523420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.523595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.918 [2024-04-18 13:50:03.523639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.523877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.524084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.524109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.524302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.524454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.524480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 
00:21:00.919 [2024-04-18 13:50:03.524735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.524982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.525007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.525243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.525398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.525423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.525542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.525659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.525685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.525905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.526197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.526226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 
00:21:00.919 [2024-04-18 13:50:03.526374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.526515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.526541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.526776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.527246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 00:21:00.919 [2024-04-18 13:50:03.527662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.919 [2024-04-18 13:50:03.527913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420 00:21:00.919 qpair failed and we were unable to recover it. 
00:21:00.919 [2024-04-18 13:50:03.528150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.528327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.528353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.528553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.528723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.528749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.529014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.529244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.529271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.529390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.529533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.529559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.529799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.529976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.530001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.530273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.530432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.530458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.530642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.530898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.530924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.531142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.531319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.531345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.531574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.531761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.531797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.919 qpair failed and we were unable to recover it.
00:21:00.919 [2024-04-18 13:50:03.531981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.532198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.919 [2024-04-18 13:50:03.532240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.920 qpair failed and we were unable to recover it.
00:21:00.920 [2024-04-18 13:50:03.532452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.920 [2024-04-18 13:50:03.532699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.920 [2024-04-18 13:50:03.532725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.920 qpair failed and we were unable to recover it.
00:21:00.920 [2024-04-18 13:50:03.533023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.920 [2024-04-18 13:50:03.533270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.920 [2024-04-18 13:50:03.533298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.920 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.533518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.533726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.533751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.533939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.534166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.534306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6448000b90 with addr=10.0.0.2, port=4420
00:21:00.921 [2024-04-18 13:50:03.534579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.534820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.534849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.535116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.535348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.535376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.535529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.535680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.535719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.535929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.536136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.536183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.536467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.536693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.536718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.536970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.537125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.537150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.537434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.537659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.537683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.537902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.538150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.538196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.538431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.538612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.538637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.538875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.539097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.539122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.539308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.539508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.539533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.539784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.540207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.540573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.540848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.541077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.541309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.541336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.541528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.541674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.541698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.541855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.542182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.542753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.542997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.543238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.543511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.543536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.543740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.543980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.544004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.544256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.544410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.544436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.544611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.544790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.544821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.921 qpair failed and we were unable to recover it.
00:21:00.921 [2024-04-18 13:50:03.545062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.545215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.921 [2024-04-18 13:50:03.545257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.545466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.545721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.545746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.545984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.546225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.546251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.546452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.546696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.546721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.546901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.547252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.547696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.547931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.548150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.548387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.548413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.548669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.548874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.548904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.549138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.549395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.549422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.549677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.549935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.549960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.550192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.550402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.550427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.550576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.550797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.550822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.551035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.551206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.551232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.551455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.551713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.551738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.552008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.552385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.552743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.552970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.553155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.553410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.553440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.553616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.553849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.553874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.554085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.554262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.554297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.554491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.554652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.554677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.554906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.555131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.555171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.555401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.555625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.555651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.555921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.556185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.556212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.556466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.556728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.556754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.922 qpair failed and we were unable to recover it.
00:21:00.922 [2024-04-18 13:50:03.557052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.922 [2024-04-18 13:50:03.557277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.557304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.557467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.557671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.557697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.557902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.558339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.558681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.558918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.559121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.559275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.559301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.559435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.559652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.559677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.559954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.560192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.560219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.560474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.560720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.923 [2024-04-18 13:50:03.560745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.923 qpair failed and we were unable to recover it.
00:21:00.923 [2024-04-18 13:50:03.561063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.561266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.561294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.561520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.561634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.561658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.561846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.562090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.562114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.562271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.562515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.562540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 
00:21:00.923 [2024-04-18 13:50:03.562804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.563072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.563098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.563393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.563633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.563658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.563815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.564025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.564050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.564329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.564522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.564548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 
00:21:00.923 [2024-04-18 13:50:03.564833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.565094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.565120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.565415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.565676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.565702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.565932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.566172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.566226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.566449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.566708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.566734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 
00:21:00.923 [2024-04-18 13:50:03.567014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.567233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.567259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.567435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.567671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.567697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.567938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.568193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.568220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.568420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.568626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.568663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 
00:21:00.923 [2024-04-18 13:50:03.568889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.569048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.923 [2024-04-18 13:50:03.569072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.923 qpair failed and we were unable to recover it. 00:21:00.923 [2024-04-18 13:50:03.569321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.569549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.569574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.569752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.569928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.569953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.570211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.570499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.570524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.570813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.571097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.571123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.571384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.571582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.571606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.571784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.572059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.572085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.572308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.572495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.572519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.572803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.573078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.573103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.573381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.573652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.573678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.573938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.574291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.574749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.574994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.575215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.575438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.575479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.575747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.575946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.575970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.576255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.576500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.576524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.576707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.576970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.576995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.577237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.577397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.577423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.577630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.577790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.577818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.578011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.578153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.578198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.578421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.578691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.578716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.578961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.579208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.579235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.579508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.579705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.579740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.579981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.580208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.580234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 
00:21:00.924 [2024-04-18 13:50:03.580414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.580690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.580715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.580934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.581096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.581120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.924 [2024-04-18 13:50:03.581391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.581557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.924 [2024-04-18 13:50:03.581582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.924 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.581862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.582169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.582224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.582439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.582569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.582593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.582843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.583071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.583097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.583239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.583495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.583521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.583795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.583992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.584017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.584268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.584511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.584537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.584810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.585040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.585065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.585358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.585607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.585632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.585897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.586102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.586128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.586370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.586575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.586601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.586780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.587001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.587026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.587301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.587523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.587548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.587804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.588051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.588076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.588303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.588541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.588566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.588843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.589128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.589153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.589440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.589706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.589732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.589997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.590254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.590281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.590550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.590753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.590778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.590977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.591223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.591250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.591527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.591811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.591837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.592076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.592325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.592352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 
00:21:00.925 [2024-04-18 13:50:03.592574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.592748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.592786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.925 [2024-04-18 13:50:03.593052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.593275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.925 [2024-04-18 13:50:03.593302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.925 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.593516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.593752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.593777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.594013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.594300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.594327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.594566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.594850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.594875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.595127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.595342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.595369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.595645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.595857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.595881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.596040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.596183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.596209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.596486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.596768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.596793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.597033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.597326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.597354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.597643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.597903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.597929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.598187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.598437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.598464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.598660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.598869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.598894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.599106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.599252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.599278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.599574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.599867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.599893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.600153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.600449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.600476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.600729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.600973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.600997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.601185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.601429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.601456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.601696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.601929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.601954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.602165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.602386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.602412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.602601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.602862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.602887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.603190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.603488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.603517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.603757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.603962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.603987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.604137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.604299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.604325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 
00:21:00.926 [2024-04-18 13:50:03.604558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.604793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.604817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.605035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.605289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.605315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.926 qpair failed and we were unable to recover it. 00:21:00.926 [2024-04-18 13:50:03.605555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.926 [2024-04-18 13:50:03.605810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.605835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.606137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.606414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.606441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.606679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.606928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.606953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.607196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.607437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.607477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.607685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.607904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.607930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.608229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.608439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.608465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.608777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.609013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.609039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.609301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.609453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.609493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.609752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.610042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.610067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.610370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.610614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.610640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.610868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.611363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.611770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.611997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.612226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.612470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.612510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.612720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.612995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.613020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.613331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.613613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.613638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.613899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.614215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.614242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.614455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.614645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.614670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.614800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.614981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.615006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.615231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.615381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.615407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.615602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.615856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.615882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.616054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.616220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.616246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.927 [2024-04-18 13:50:03.616490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.616727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.616752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.617044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.617284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.617312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.617548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.617791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.617816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 00:21:00.927 [2024-04-18 13:50:03.618119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.618378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.927 [2024-04-18 13:50:03.618405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.927 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.618621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.618872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.618897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.619145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.619340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.619366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.619513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.619626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.619651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.619797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.620106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.620131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.620406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.620553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.620578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.620868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.621119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.621144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.621412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.621686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.621712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.622023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.622285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.622312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.622589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.622835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.622860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.623083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.623327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.623354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.623583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.623742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.623767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.623892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.624070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.624094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.624362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.624598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.624623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.624835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.625271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.625751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.625993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.626190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.626450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.626491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.626711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.626868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.626892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.627026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.627172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.627201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 00:21:00.928 [2024-04-18 13:50:03.627415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.627662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.928 [2024-04-18 13:50:03.627687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.928 qpair failed and we were unable to recover it. 
00:21:00.928 [2024-04-18 13:50:03.627978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.628223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.628254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.928 qpair failed and we were unable to recover it.
00:21:00.928 [2024-04-18 13:50:03.628497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.628778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.628803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.928 qpair failed and we were unable to recover it.
00:21:00.928 [2024-04-18 13:50:03.628990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.629137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.629182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.928 qpair failed and we were unable to recover it.
00:21:00.928 [2024-04-18 13:50:03.629460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.629691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.629716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.928 qpair failed and we were unable to recover it.
00:21:00.928 [2024-04-18 13:50:03.630008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.630261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.630289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.928 qpair failed and we were unable to recover it.
00:21:00.928 [2024-04-18 13:50:03.630537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.928 [2024-04-18 13:50:03.630684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.630708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.630872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.631304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.631682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.631915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.632068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.632318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.632345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.632634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.632845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.632870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.633139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.633360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.633387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.633508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.633695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.633735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.633930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.634224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.634251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.634529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.634755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.634780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.635021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.635306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.635333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.635587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.635878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.635904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.636121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.636515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.636801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.636971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.637231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.637461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.637501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.637763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.637975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.638000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.638157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.638293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.638319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.638535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.638728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.638754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.639054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.639289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.639316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.639455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.639572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.639597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.639862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.640151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.640183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.640425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.640567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.640591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.640824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.641058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.641084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.641312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.641566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.641591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.641779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.642065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.929 [2024-04-18 13:50:03.642091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.929 qpair failed and we were unable to recover it.
00:21:00.929 [2024-04-18 13:50:03.642345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.642605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.642630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.642840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.643360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.643831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.643988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.644160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.644303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.644343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.644608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.644897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.644922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.645221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.645443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.645484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.645742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.646025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.646051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.646314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.646603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.646629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.646890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.647209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.647663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.647917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.648161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.648450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.648491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.648780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.649058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.649084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.649309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.649529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.649555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.649853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.650165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.650210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.650452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.650672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.650697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.650904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.651210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.651785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.651970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.652166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.652392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.652422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.652715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.652926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.652952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.653248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.653493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.653518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.653678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.653972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.653997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.654310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.654591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.654617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.654925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.655134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.655160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.655423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.655657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.655682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.930 qpair failed and we were unable to recover it.
00:21:00.930 [2024-04-18 13:50:03.655975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.930 [2024-04-18 13:50:03.656232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.656259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.656495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.656735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.656760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.657046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.657264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.657301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.657456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.657751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.657780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.658041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.658321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.658348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.658581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.658866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.658891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.659169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.659405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.659430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.659604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.659856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.659882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.660173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.660467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.660507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.660801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.661084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.661110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.661366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.661595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.661620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.661885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.662104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.662130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.662445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.662716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.662742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.663032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.663270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.663298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.663596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.663912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.663937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.664081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.664254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.664281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.664503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.664789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.664815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.665109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.665384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.665411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.665653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.665905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.665931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.666228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.666471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.666498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.666714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.666939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.666965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.667197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.667388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.667415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.667596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.667862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.667888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.668053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.668416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.668796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.668981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.669144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.669284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.931 [2024-04-18 13:50:03.669311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.931 qpair failed and we were unable to recover it.
00:21:00.931 [2024-04-18 13:50:03.669471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.669622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.669647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.932 qpair failed and we were unable to recover it.
00:21:00.932 [2024-04-18 13:50:03.669782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.669962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.669987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.932 qpair failed and we were unable to recover it.
00:21:00.932 [2024-04-18 13:50:03.670155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.670314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:00.932 [2024-04-18 13:50:03.670340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420
00:21:00.932 qpair failed and we were unable to recover it.
00:21:00.932 [2024-04-18 13:50:03.670504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.670668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.670694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.670846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.671200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.671512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.671720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.671877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.672238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.672635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.672845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.673006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.673383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.673768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.673919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.674077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.674279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.674305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.674488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.674674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.674699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.674858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.675200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.675476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.675687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.675850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.676201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.676567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.676750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.676907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.677187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.677550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.677727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.677907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.678286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.678684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.678828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 
00:21:00.932 [2024-04-18 13:50:03.678967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.679110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.679139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.932 [2024-04-18 13:50:03.679279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.679434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.932 [2024-04-18 13:50:03.679459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.932 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.679582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.679739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.679764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.679911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 
00:21:00.933 [2024-04-18 13:50:03.680237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.680607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.680860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.680997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.681187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.681365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.681391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 
00:21:00.933 [2024-04-18 13:50:03.681542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.681694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.681719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.681874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.682227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.682568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.682744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 
00:21:00.933 [2024-04-18 13:50:03.682875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.683267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.683628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.683807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.683928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 
00:21:00.933 [2024-04-18 13:50:03.684245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.684591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.684745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.684868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.685053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.933 [2024-04-18 13:50:03.685079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:00.933 qpair failed and we were unable to recover it. 00:21:00.933 [2024-04-18 13:50:03.685214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.685394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.685420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 
00:21:01.213 [2024-04-18 13:50:03.685543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.685692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.685717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.685876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.686234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.686577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.686746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 
00:21:01.213 [2024-04-18 13:50:03.686927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.687256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.687519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.687666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 00:21:01.213 [2024-04-18 13:50:03.687853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.688000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.688025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.213 qpair failed and we were unable to recover it. 
00:21:01.213 [2024-04-18 13:50:03.688186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.213 [2024-04-18 13:50:03.688339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.688365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.688525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.688631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.688657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.688803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.688956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.688981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185fb30 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 A controller has encountered a failure and is being reset. 00:21:01.214 [2024-04-18 13:50:03.689193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.689399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.689428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.689586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.689707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.689733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.689885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.690235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.690561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.690736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.690926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.691273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.691596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.691784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.691908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.692240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.692537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.692701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.692853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.693191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.693545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.693740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.693915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.694238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.694604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.694766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.694940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.695230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.695521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.695851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.695998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.696024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.214 [2024-04-18 13:50:03.696161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.696313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.696339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.696486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.696666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.696692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.696850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.697025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.697052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 00:21:01.214 [2024-04-18 13:50:03.697213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.697388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.214 [2024-04-18 13:50:03.697414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.214 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.697587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.697699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.697727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.697875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.698201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.698520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.698690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.698865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.699219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.699543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.699849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.699996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.700022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.700163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.700329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.700355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.700525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.700670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.700696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.700871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.701248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.701560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.701738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.701914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.702264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.702609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.702773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.702892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.703278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.703571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.703768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.703938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.704223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.704510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.704823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.704993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.705163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.705498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.705816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.705989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.706194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 00:21:01.215 [2024-04-18 13:50:03.706516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.215 qpair failed and we were unable to recover it. 
00:21:01.215 [2024-04-18 13:50:03.706821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.215 [2024-04-18 13:50:03.706937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.706962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.707112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.707470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.707817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.707954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
00:21:01.216 [2024-04-18 13:50:03.708092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.708407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.708741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.708938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.709059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
00:21:01.216 [2024-04-18 13:50:03.709427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.709768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.709913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.710033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.710384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
00:21:01.216 [2024-04-18 13:50:03.710690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.710855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.710977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.711300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.711610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.711780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
00:21:01.216 [2024-04-18 13:50:03.711952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.712288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.712576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.712778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.712923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
00:21:01.216 [2024-04-18 13:50:03.713267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.713584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.713748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.713926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.714069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.714097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 00:21:01.216 [2024-04-18 13:50:03.714244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.714396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.216 [2024-04-18 13:50:03.714422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.216 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously from 13:50:03.714568 through 13:50:03.741428 ...]
00:21:01.220 [2024-04-18 13:50:03.741576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.741721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.741747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.741915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.742224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.742545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.742713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 
00:21:01.220 [2024-04-18 13:50:03.742882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.743172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.743495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.743811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.743980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 
00:21:01.220 [2024-04-18 13:50:03.744135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.744449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.744742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.744916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.745076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 
00:21:01.220 [2024-04-18 13:50:03.745394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.745732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.745900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.746041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.746361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 
00:21:01.220 [2024-04-18 13:50:03.746672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.746807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.746969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.747289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.747607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.747786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 
00:21:01.220 [2024-04-18 13:50:03.747924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.748265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.748579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.220 [2024-04-18 13:50:03.748765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.220 qpair failed and we were unable to recover it. 00:21:01.220 [2024-04-18 13:50:03.748957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.749289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.749614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.749793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.749920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.750281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.750596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.750789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.750961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.751141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.751165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.751405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.751542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.751567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.751773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.751982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.752006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.752203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.752399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.752424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.752615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.752802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.752827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.753025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.753456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.753807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.753975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.754153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.754463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.754830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.754996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.755185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.755360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.755386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.755553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.755737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.755761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.755919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.756223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.756633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.756830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.756989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.757326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.757757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.757966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 
00:21:01.221 [2024-04-18 13:50:03.758120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.758277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.758302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.758486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.758653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.758678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.758905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.759084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.759109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.221 qpair failed and we were unable to recover it. 00:21:01.221 [2024-04-18 13:50:03.759328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.221 [2024-04-18 13:50:03.759464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.759504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.222 qpair failed and we were unable to recover it. 
00:21:01.222 [2024-04-18 13:50:03.759669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.759810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.759835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.222 qpair failed and we were unable to recover it. 00:21:01.222 [2024-04-18 13:50:03.760011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.222 qpair failed and we were unable to recover it. 00:21:01.222 [2024-04-18 13:50:03.760352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.222 qpair failed and we were unable to recover it. 00:21:01.222 [2024-04-18 13:50:03.760704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.222 [2024-04-18 13:50:03.760912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.222 qpair failed and we were unable to recover it. 
00:21:01.222 [2024-04-18 13:50:03.761072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.761367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.761669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.761867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.762048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.762361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.762700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.762852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.762970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.763286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.763658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.763829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.764020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.764285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.764574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.764770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.764925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.765229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.765509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.765712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.765874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.766255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.766538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.766702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.766870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.767249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.767557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.767727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.767890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.768007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.768032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.768141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.768258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.768283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.222 qpair failed and we were unable to recover it.
00:21:01.222 [2024-04-18 13:50:03.768427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.222 [2024-04-18 13:50:03.768551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.768577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.768737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.768853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.768883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.769027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.769359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.769654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.769827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.769973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.770329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.770683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.770819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.770965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.771315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.771663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.771831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.771999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.772322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.772656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.772824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.772994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.773285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.773622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.773762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.773922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.774281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.774595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.774792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.774933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.775289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.775607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.775768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.223 qpair failed and we were unable to recover it.
00:21:01.223 [2024-04-18 13:50:03.775916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.223 [2024-04-18 13:50:03.776053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.776198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.776484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.776794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.776971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.777098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.777399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.777733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.777930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.778096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.778410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.778797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.778997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.779153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.779331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.779357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.779507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.779676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.779716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.779876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.780220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.780497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.780832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.780998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.781135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.781484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.781809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.781987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.782187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.782498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.782861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.782994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.783143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.783434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.783749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.783911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.784075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.784362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.784648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.784789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.784932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.785071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.224 [2024-04-18 13:50:03.785097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.224 qpair failed and we were unable to recover it.
00:21:01.224 [2024-04-18 13:50:03.785289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.785433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.785458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.785627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.785735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.785760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.785926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.786248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.786565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.786874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.786985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.787182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.787517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.787860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.787992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.788153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.788504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.788830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.788985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.789011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.789131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.789279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.225 [2024-04-18 13:50:03.789306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.225 qpair failed and we were unable to recover it.
00:21:01.225 [2024-04-18 13:50:03.789454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.789622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.789661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.789817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.789931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.789956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.790125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.790438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 
00:21:01.225 [2024-04-18 13:50:03.790750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.790952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.791083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.791370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.791715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.791856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 
00:21:01.225 [2024-04-18 13:50:03.791992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.792305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.792594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.792733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.792907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 
00:21:01.225 [2024-04-18 13:50:03.793239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.793560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.793836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.793999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.225 qpair failed and we were unable to recover it. 00:21:01.225 [2024-04-18 13:50:03.794166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.225 [2024-04-18 13:50:03.794313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.794338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.794482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.794647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.794687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.794848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.795203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.795503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.795805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.795997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.796137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.796434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.796775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.796985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.797125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.797429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.797767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.797932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.798048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.798327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.798688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.798857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.799003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.799326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.799635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.799854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.800017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.800349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.800689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.800833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.801000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.801301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.801640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.801832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.801989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 
00:21:01.226 [2024-04-18 13:50:03.802317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.802620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.802773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.802917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.803054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.226 [2024-04-18 13:50:03.803079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.226 qpair failed and we were unable to recover it. 00:21:01.226 [2024-04-18 13:50:03.803240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.803386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.803411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 
00:21:01.227 [2024-04-18 13:50:03.803554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.803697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.803723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.803898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.804235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.804510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 
00:21:01.227 [2024-04-18 13:50:03.804840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.804980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.805146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.805470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.805779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.805951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 
00:21:01.227 [2024-04-18 13:50:03.806094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.806375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.806729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.806921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 00:21:01.227 [2024-04-18 13:50:03.807114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.807256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.227 [2024-04-18 13:50:03.807282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.227 qpair failed and we were unable to recover it. 
00:21:01.227 [2024-04-18 13:50:03.807426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.227 [2024-04-18 13:50:03.807564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.227 [2024-04-18 13:50:03.807589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.227 qpair failed and we were unable to recover it.
[... identical four-entry error sequence repeats (only timestamps differ, 13:50:03.807753 through 13:50:03.837576); duplicate entries elided ...]
00:21:01.230 [2024-04-18 13:50:03.837721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.837867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.837892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.230 qpair failed and we were unable to recover it. 00:21:01.230 [2024-04-18 13:50:03.838051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.230 qpair failed and we were unable to recover it. 00:21:01.230 [2024-04-18 13:50:03.838365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.230 qpair failed and we were unable to recover it. 00:21:01.230 [2024-04-18 13:50:03.838676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.838860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.230 qpair failed and we were unable to recover it. 
00:21:01.230 [2024-04-18 13:50:03.839051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.230 [2024-04-18 13:50:03.839192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.839218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.839364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.839504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.839542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.839702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.839871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.839897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.840035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.840373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.840760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.840930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.841088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.841402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.841740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.841905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.842052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.842383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.842719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.842931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.843190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.843342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.843368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.843499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.843667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.843694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.843836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.844205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.844548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.844726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.844886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.845190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.845497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.845689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.845944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.846296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.846578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.846782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.847010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.231 [2024-04-18 13:50:03.847376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.847725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.847868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.848026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.848169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.848198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 00:21:01.231 [2024-04-18 13:50:03.848415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.848548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.231 [2024-04-18 13:50:03.848572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.231 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.848757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.848886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.848926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.849170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.849356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.849381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.849556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.849710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.849735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.849928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.850300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.850672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.850819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.850962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.851310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.851629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.851837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.851978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.852292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.852629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.852768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.852919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.853226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.853564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.853744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.853898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.854170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.854517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.854826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.854994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.855164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.855489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.855813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.855983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.856113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.232 [2024-04-18 13:50:03.856408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 
00:21:01.232 [2024-04-18 13:50:03.856734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.232 [2024-04-18 13:50:03.856867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.232 qpair failed and we were unable to recover it. 00:21:01.233 [2024-04-18 13:50:03.857007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.233 qpair failed and we were unable to recover it. 00:21:01.233 [2024-04-18 13:50:03.857323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.233 qpair failed and we were unable to recover it. 00:21:01.233 [2024-04-18 13:50:03.857623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.233 [2024-04-18 13:50:03.857758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.233 qpair failed and we were unable to recover it. 
00:21:01.236 [... identical posix_sock_create connect() failures (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x7f6450000b90, addr=10.0.0.2, port=4420 repeated through 2024-04-18 13:50:03.886644; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:21:01.236 [2024-04-18 13:50:03.886775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.886960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.886986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.887160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.887316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.887342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.887510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.887682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.887706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.887888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.236 [2024-04-18 13:50:03.888249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.888556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.888734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.888896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.889281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.236 [2024-04-18 13:50:03.889689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.889857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.889994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.890330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.890654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.890819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.236 [2024-04-18 13:50:03.890963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.891363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.891744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.891938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.892099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.892241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.892268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.236 [2024-04-18 13:50:03.892413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.892580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.892621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.892803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.893238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.893577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.893758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.236 [2024-04-18 13:50:03.893949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.894328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.894670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.894840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 00:21:01.236 [2024-04-18 13:50:03.895024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.895192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.236 [2024-04-18 13:50:03.895217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.236 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.895377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.895606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.895631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.895786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.895901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.895926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.896067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.896362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.896710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.896887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.897114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.897292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.897319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.897435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.897580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.897606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.897839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.897992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.898031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.898185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.898372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.898398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.898634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.898811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.898835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.899089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.899444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.899794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.899975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.900133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.900384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.900416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.900602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.900797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.900824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.900998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.901319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.901762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.901976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.902109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.902352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.902379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.902497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.902703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.902742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.902894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.903199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.903592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.903762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 00:21:01.237 [2024-04-18 13:50:03.903912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.904069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.237 [2024-04-18 13:50:03.904095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.237 qpair failed and we were unable to recover it. 
00:21:01.237 [2024-04-18 13:50:03.904306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.904451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.904476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.904682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.904856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.904896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.905023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.905185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.905212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.905448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.905602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.905626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 
00:21:01.238 [2024-04-18 13:50:03.905837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.905987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.906011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.906203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.906413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.906440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.906658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.906829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.906854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 00:21:01.238 [2024-04-18 13:50:03.907046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.907189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.238 [2024-04-18 13:50:03.907215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.238 qpair failed and we were unable to recover it. 
00:21:01.238 [2024-04-18 13:50:03.907366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.907605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.907630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.907789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.907947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.907987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.908190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.908309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.908334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.908526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.908705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.908731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.908968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.909334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.909648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.909790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.909933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.910298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.910580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.910781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.910993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.911168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.911200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.911399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.911575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.911600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.911855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.912304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.912692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.912881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.913108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.913470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.913788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.913924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.914099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.914228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.914254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.914441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.914614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.914640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.238 qpair failed and we were unable to recover it.
00:21:01.238 [2024-04-18 13:50:03.914790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.238 [2024-04-18 13:50:03.914911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.914936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.915190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.915351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.915377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.915532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.915711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.915738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.915859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.916273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.916618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.916769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.916887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.917288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.917711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.917883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.918008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.918358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.918670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.918806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.918961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.919324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.919620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.919775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.919879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.920212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.920531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.920850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.920995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.921021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.921175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.921332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.921357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.921532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.921676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.921701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.921927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.922303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.922580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.922749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.923010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.923186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.923212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.923441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.923630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.923656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.923863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.924005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.924031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.924201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.924348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.924374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.239 qpair failed and we were unable to recover it.
00:21:01.239 [2024-04-18 13:50:03.924523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.239 [2024-04-18 13:50:03.924669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.924695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.924888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.925249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.925582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.925769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.925919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.926259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.926610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.926803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.926948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.927242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.927645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.927861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.928012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.928309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.928691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.928894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.929013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.929289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.929635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.929824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.930001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.930319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.930726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.930940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.931172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.931343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.931369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.931566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.931813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.931839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.932003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.932282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.932309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.932548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.932781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.932808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.933004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.933220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.933249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.933426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.933598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.933624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.933849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.934016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.934045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.934235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.934461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.240 [2024-04-18 13:50:03.934487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.240 qpair failed and we were unable to recover it.
00:21:01.240 [2024-04-18 13:50:03.934722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.934870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.934901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.935128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.935395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.935422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.935662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.935861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.935888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.936121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.936296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.936322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.936621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.936777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.936803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.936973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.937189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.937216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.937390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.937557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.937582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.937824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.938263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.938684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.938903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.939116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.939318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.939350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.939555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.939808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.939834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.940073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.940300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.241 [2024-04-18 13:50:03.940327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.241 qpair failed and we were unable to recover it.
00:21:01.241 [2024-04-18 13:50:03.940554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.940817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.940844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.941109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.941331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.941358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.941637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.941893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.941920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.942139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.942329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.942357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 
00:21:01.241 [2024-04-18 13:50:03.942523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.942690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.942716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.942927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.943371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.943779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.943936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 
00:21:01.241 [2024-04-18 13:50:03.944130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.944291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.944320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.944481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.944664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.944691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.944870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.945249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 
00:21:01.241 [2024-04-18 13:50:03.945566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.945733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.241 qpair failed and we were unable to recover it. 00:21:01.241 [2024-04-18 13:50:03.945913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.946063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.241 [2024-04-18 13:50:03.946089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.946286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.946443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.946469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.946629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.946792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.946817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 
00:21:01.242 [2024-04-18 13:50:03.946938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.947330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.947660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.947895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.948040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.948207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.948234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 
00:21:01.242 [2024-04-18 13:50:03.948395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 13:50:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.242 [2024-04-18 13:50:03.948554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.948580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 13:50:03 -- common/autotest_common.sh@850 -- # return 0 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.948769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 13:50:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:01.242 [2024-04-18 13:50:03.948906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 13:50:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.242 [2024-04-18 13:50:03.948933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 13:50:03 -- common/autotest_common.sh@10 -- # set +x 00:21:01.242 [2024-04-18 13:50:03.949090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.949285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.949313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 
00:21:01.242 [2024-04-18 13:50:03.949475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.949641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.949667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.949865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.950222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.950553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.950755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 
00:21:01.242 [2024-04-18 13:50:03.950894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.951272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.951677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.951865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.952070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.952240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.952267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 
00:21:01.242 [2024-04-18 13:50:03.952424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.952664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.952689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.952911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.953083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.953110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.953295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.953419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.242 [2024-04-18 13:50:03.953445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.242 qpair failed and we were unable to recover it. 00:21:01.242 [2024-04-18 13:50:03.953592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.953739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.953767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.953897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.954265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.954565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.954769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.954940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.955275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.955582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.955760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.955878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.956165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.956460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.956719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.956860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.957200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.957466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.957723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.957894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.958166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.958507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.958704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.958865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.959211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.959536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.959691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.959858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.960202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.960498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.960715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.960948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.961315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.961653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.961812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 
00:21:01.243 [2024-04-18 13:50:03.961992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.962168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.243 [2024-04-18 13:50:03.962202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.243 qpair failed and we were unable to recover it. 00:21:01.243 [2024-04-18 13:50:03.962347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.962469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.962496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.962654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.962814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.962840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.963031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 
00:21:01.244 [2024-04-18 13:50:03.963344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.963694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.963928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.964062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.964229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.964257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.964398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.964601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.964628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 
00:21:01.244 [2024-04-18 13:50:03.964821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.964976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.965003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.965150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.965287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.965313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.965543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.965736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.965762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.965942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 
00:21:01.244 [2024-04-18 13:50:03.966305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.966604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.966791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.966921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.967082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.967109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.967268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.967402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.967428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 
00:21:01.244 [2024-04-18 13:50:03.967629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.967759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.967785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 [2024-04-18 13:50:03.967913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.968068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.968095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 13:50:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:01.244 [2024-04-18 13:50:03.968267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.968431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 13:50:03 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:01.244 [2024-04-18 13:50:03.968458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 13:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.244 [2024-04-18 13:50:03.968648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 13:50:03 -- common/autotest_common.sh@10 -- # set +x
00:21:01.244 [2024-04-18 13:50:03.968781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.968808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 [2024-04-18 13:50:03.968979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 [2024-04-18 13:50:03.969380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 [2024-04-18 13:50:03.969751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.244 [2024-04-18 13:50:03.969930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.244 qpair failed and we were unable to recover it.
00:21:01.244 [2024-04-18 13:50:03.970104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.970426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.970757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.970907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.971060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.971223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.971250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 
00:21:01.244 [2024-04-18 13:50:03.971392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.971578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.971605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.244 qpair failed and we were unable to recover it. 00:21:01.244 [2024-04-18 13:50:03.971798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.244 [2024-04-18 13:50:03.971960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.971986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.972131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.972307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.972336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.972472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.972629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.972656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.972886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.973248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.973621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.973812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.973982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.974331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.974633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.974839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.975029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.975367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.975744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.975985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.976120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.976424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.976803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.976963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.977142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.977478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.977821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.977979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.978120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.978429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.978817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.978968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.979149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.979274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.979301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.979434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.979630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.979656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.979845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.980217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.980532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.980723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.980893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.981023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.981049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 
00:21:01.245 [2024-04-18 13:50:03.981249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.981375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.245 [2024-04-18 13:50:03.981402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.245 qpair failed and we were unable to recover it. 00:21:01.245 [2024-04-18 13:50:03.981566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.981805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.981831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.981996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.982359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 
00:21:01.246 [2024-04-18 13:50:03.982753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.982946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.983126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.983274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.983301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.983429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.983622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.983649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.983817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 
00:21:01.246 [2024-04-18 13:50:03.984163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.984484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.984707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.984876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.985214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 
00:21:01.246 [2024-04-18 13:50:03.985555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.985766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.985941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.986294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.986641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.986817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 
00:21:01.246 [2024-04-18 13:50:03.986957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.987324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.987627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.987832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 00:21:01.246 [2024-04-18 13:50:03.988048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.988232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.246 [2024-04-18 13:50:03.988259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420 00:21:01.246 qpair failed and we were unable to recover it. 
00:21:01.246 [2024-04-18 13:50:03.988393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.988535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.988560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.246 qpair failed and we were unable to recover it.
00:21:01.246 [2024-04-18 13:50:03.988752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.988889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.988916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.246 qpair failed and we were unable to recover it.
00:21:01.246 [2024-04-18 13:50:03.989086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.246 qpair failed and we were unable to recover it.
00:21:01.246 [2024-04-18 13:50:03.989407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.246 qpair failed and we were unable to recover it.
00:21:01.246 [2024-04-18 13:50:03.989808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.246 [2024-04-18 13:50:03.989994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.246 qpair failed and we were unable to recover it.
00:21:01.246 [2024-04-18 13:50:03.990130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.990441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.990723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.990921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.991123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.991261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.991290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.991491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.991660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.991687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.991877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.992245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.992602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.992807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 Malloc0
00:21:01.247 [2024-04-18 13:50:03.992988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.993154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.993187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 13:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.247 [2024-04-18 13:50:03.993325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 13:50:03 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:21:01.247 [2024-04-18 13:50:03.993484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-18 13:50:03.993510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 13:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.247 13:50:03 -- common/autotest_common.sh@10 -- # set +x
00:21:01.247 [2024-04-18 13:50:03.993666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.993854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.993881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.994016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.994362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.994711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.994929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.995128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.995309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.995336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.995463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.995608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.995634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.995842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.996244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.996525] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:01.247 [2024-04-18 13:50:03.996574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.996776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.996928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.997245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.997598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.997781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.997925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.998084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.998120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.247 qpair failed and we were unable to recover it.
00:21:01.247 [2024-04-18 13:50:03.998301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.247 [2024-04-18 13:50:03.998445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.998472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:03.998632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.998777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.998803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:03.998979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:03.999345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:03.999677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:03.999902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.000045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.000388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.000689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.000895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.001030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.001376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.001678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.001881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.002013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.002194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.002227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.002406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.002616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.002655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.002823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.002978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.003004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.003169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.003364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.003391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.003567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.003727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.003753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.003889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.004033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.004062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.004234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.004418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 [2024-04-18 13:50:04.004444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 [2024-04-18 13:50:04.004632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.510 13:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.510 [2024-04-18 13:50:04.004834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-18 13:50:04.004860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.510 qpair failed and we were unable to recover it.
00:21:01.510 13:50:04 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:01.510 13:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.510 [2024-04-18 13:50:04.005040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:21:01.511 [2024-04-18 13:50:04.005193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.005230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.005397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.005622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.005655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.005819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.005948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.005974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.006156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.006367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.006394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.006607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.006850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.006877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.007041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.007384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.007786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.007966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.008113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.008279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.008306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.008467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.008686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.008712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.008887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.009203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.009508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.009823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.009974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.010126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.010430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.010776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.010955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.011074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.011360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.011756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.011943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.012113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.012258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.012284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.012451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.012659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.012685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 13:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.511 [2024-04-18 13:50:04.012865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 13:50:04 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:01.511 [2024-04-18 13:50:04.013054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 13:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.511 [2024-04-18 13:50:04.013080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:21:01.511 [2024-04-18 13:50:04.013208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.013375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.013401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.013588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.013746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.013772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.511 [2024-04-18 13:50:04.013939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.014121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.511 [2024-04-18 13:50:04.014147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.511 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.014303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.014495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.014520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.014699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.014844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.014884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.015039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.015413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.015768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.015965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.016134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.016284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.016311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.016449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.016648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.016673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.016885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.017265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.017623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.017870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.018070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.018255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.018281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.018421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.018621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.018646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.018791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.018996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.019020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.019192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.019363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.019390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.019574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.019753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.019777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.019954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.020140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.020165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.020339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.020536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.020562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.020738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 13:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.512 [2024-04-18 13:50:04.020946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.020972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 13:50:04 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:01.512 13:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.512 [2024-04-18 13:50:04.021166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:21:01.512 [2024-04-18 13:50:04.021315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.021341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.021524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.021728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.021754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.021938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.022270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.022630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.022836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.023028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.023151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.023182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.023346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.023552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.023578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.023771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.023975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.024000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.512 qpair failed and we were unable to recover it.
00:21:01.512 [2024-04-18 13:50:04.024222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.512 [2024-04-18 13:50:04.024397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.513 [2024-04-18 13:50:04.024423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6450000b90 with addr=10.0.0.2, port=4420
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.024596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.513 [2024-04-18 13:50:04.024797] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:01.513 [2024-04-18 13:50:04.027932] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:21:01.513 [2024-04-18 13:50:04.027994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f6450000b90 (107): Transport endpoint is not connected
00:21:01.513 [2024-04-18 13:50:04.028074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 13:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.513 13:50:04 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:01.513 13:50:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:01.513 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:21:01.513 13:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:01.513 13:50:04 -- host/target_disconnect.sh@58 -- # wait 2683364
00:21:01.513 [2024-04-18 13:50:04.037209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.037355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.037384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.037400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.037413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.037444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.047192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.047330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.047358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.047373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.047386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.047417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.057117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.057271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.057299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.057314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.057327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.057357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.067117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.067270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.067298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.067318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.067332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.067363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.077147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.077286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.077312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.077326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.077339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.077369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.087174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.087318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.087346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.087361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.087373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.087404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.097260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.097424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.097452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.097468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.097481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.097511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.107245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.107377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.107404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.107420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.107433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.107463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.117230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.117353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.117380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.117396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.117408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.117439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.127289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.127422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.127449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.127481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.127493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.127522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.137323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.137454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.137480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.513 [2024-04-18 13:50:04.137496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.513 [2024-04-18 13:50:04.137508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.513 [2024-04-18 13:50:04.137538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.513 qpair failed and we were unable to recover it.
00:21:01.513 [2024-04-18 13:50:04.147318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.513 [2024-04-18 13:50:04.147435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.513 [2024-04-18 13:50:04.147476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.147491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.147503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.147533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.157370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.157500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.157556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.157571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.157584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.157612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.167444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.167588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.167615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.167631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.167644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.167674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.177403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.177524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.177550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.177565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.177578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.177607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.187563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.187696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.187722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.187737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.187749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.187778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.197549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.197663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.197689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.197704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.197716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.197750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.207551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.207660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.207687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.207701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.207714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.207742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.217598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.217718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.217744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.217759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.217771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.217800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.227563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.227682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.227708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.227723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.227735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.227764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.237604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.237718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.237743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.237758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.237770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.237799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.247663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.247776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.247806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.247822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.247834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.247863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.257672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.257792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.257818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.514 [2024-04-18 13:50:04.257832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.514 [2024-04-18 13:50:04.257845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.514 [2024-04-18 13:50:04.257873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.514 qpair failed and we were unable to recover it.
00:21:01.514 [2024-04-18 13:50:04.267699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.514 [2024-04-18 13:50:04.267824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.514 [2024-04-18 13:50:04.267850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.515 [2024-04-18 13:50:04.267865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.515 [2024-04-18 13:50:04.267878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.515 [2024-04-18 13:50:04.267907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.515 qpair failed and we were unable to recover it.
00:21:01.515 [2024-04-18 13:50:04.277747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.515 [2024-04-18 13:50:04.277865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.515 [2024-04-18 13:50:04.277890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.515 [2024-04-18 13:50:04.277905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.515 [2024-04-18 13:50:04.277918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.515 [2024-04-18 13:50:04.277947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.515 qpair failed and we were unable to recover it.
00:21:01.515 [2024-04-18 13:50:04.287763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.515 [2024-04-18 13:50:04.287874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.515 [2024-04-18 13:50:04.287900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.515 [2024-04-18 13:50:04.287914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.515 [2024-04-18 13:50:04.287931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.515 [2024-04-18 13:50:04.287961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.515 qpair failed and we were unable to recover it.
00:21:01.515 [2024-04-18 13:50:04.297764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.515 [2024-04-18 13:50:04.297879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.515 [2024-04-18 13:50:04.297905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.515 [2024-04-18 13:50:04.297919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.515 [2024-04-18 13:50:04.297932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.515 [2024-04-18 13:50:04.297960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.515 qpair failed and we were unable to recover it.
00:21:01.515 [2024-04-18 13:50:04.307810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.515 [2024-04-18 13:50:04.307943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.515 [2024-04-18 13:50:04.307969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.515 [2024-04-18 13:50:04.307983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.515 [2024-04-18 13:50:04.307996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.515 [2024-04-18 13:50:04.308025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.515 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.317836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.317954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.317980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.317995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.318007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.318036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.327861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.327984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.328010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.328025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.328038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.328067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.337894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.338023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.338049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.338063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.338076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.338105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.347927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.348040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.348066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.348080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.348093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.348122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.357957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.358120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.358145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.358160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.358172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.358209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.368012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.368189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.368216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.368232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.368244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.368275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.377990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.378107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.378133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.378148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.378188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.378221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.388026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.388183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.388209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.388225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.388237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.388267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.398032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.398138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.398188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.398207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.398220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.398250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.408088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.408222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.408248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.408264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.408277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.408307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.418088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.774 [2024-04-18 13:50:04.418230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.774 [2024-04-18 13:50:04.418255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.774 [2024-04-18 13:50:04.418270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.774 [2024-04-18 13:50:04.418283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.774 [2024-04-18 13:50:04.418313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.774 qpair failed and we were unable to recover it.
00:21:01.774 [2024-04-18 13:50:04.428108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.428250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.428276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.428291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.428304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.428335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.438136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.438274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.438301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.438316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.438329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.438359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.448203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.448324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.448350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.448365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.448377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.448408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.458239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.458372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.458398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.458414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.458426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.458456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.468235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.468356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.468382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.468403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.468416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.468446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.478261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.478408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.478435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.478450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.478463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.478506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.488282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.488399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.488426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.488441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.488453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.488498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.498352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.498467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.498508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.498523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.498535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.498564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.508350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:01.775 [2024-04-18 13:50:04.508459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:01.775 [2024-04-18 13:50:04.508483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:01.775 [2024-04-18 13:50:04.508513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:01.775 [2024-04-18 13:50:04.508526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:01.775 [2024-04-18 13:50:04.508556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.775 qpair failed and we were unable to recover it.
00:21:01.775 [2024-04-18 13:50:04.518362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.775 [2024-04-18 13:50:04.518479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.775 [2024-04-18 13:50:04.518520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.775 [2024-04-18 13:50:04.518535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.775 [2024-04-18 13:50:04.518548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.775 [2024-04-18 13:50:04.518577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.775 qpair failed and we were unable to recover it. 
00:21:01.775 [2024-04-18 13:50:04.528411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.775 [2024-04-18 13:50:04.528533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.775 [2024-04-18 13:50:04.528559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.775 [2024-04-18 13:50:04.528574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.775 [2024-04-18 13:50:04.528586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.775 [2024-04-18 13:50:04.528615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.775 qpair failed and we were unable to recover it. 
00:21:01.775 [2024-04-18 13:50:04.538466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.775 [2024-04-18 13:50:04.538608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.775 [2024-04-18 13:50:04.538633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.775 [2024-04-18 13:50:04.538649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.775 [2024-04-18 13:50:04.538661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.775 [2024-04-18 13:50:04.538690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.775 qpair failed and we were unable to recover it. 
00:21:01.775 [2024-04-18 13:50:04.548528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.775 [2024-04-18 13:50:04.548640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.775 [2024-04-18 13:50:04.548666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.775 [2024-04-18 13:50:04.548680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.775 [2024-04-18 13:50:04.548692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.776 [2024-04-18 13:50:04.548721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.776 qpair failed and we were unable to recover it. 
00:21:01.776 [2024-04-18 13:50:04.558488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.776 [2024-04-18 13:50:04.558620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.776 [2024-04-18 13:50:04.558650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.776 [2024-04-18 13:50:04.558666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.776 [2024-04-18 13:50:04.558678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.776 [2024-04-18 13:50:04.558707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.776 qpair failed and we were unable to recover it. 
00:21:01.776 [2024-04-18 13:50:04.568525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.776 [2024-04-18 13:50:04.568643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.776 [2024-04-18 13:50:04.568669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.776 [2024-04-18 13:50:04.568683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.776 [2024-04-18 13:50:04.568696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.776 [2024-04-18 13:50:04.568725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.776 qpair failed and we were unable to recover it. 
00:21:01.776 [2024-04-18 13:50:04.578639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.776 [2024-04-18 13:50:04.578797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.776 [2024-04-18 13:50:04.578822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.776 [2024-04-18 13:50:04.578837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.776 [2024-04-18 13:50:04.578861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:01.776 [2024-04-18 13:50:04.578890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.776 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.588578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.588703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.588728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.588743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.588755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.034 [2024-04-18 13:50:04.588784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.598586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.598696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.598722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.598736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.598748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.034 [2024-04-18 13:50:04.598782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.608642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.608808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.608834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.608848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.608861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.034 [2024-04-18 13:50:04.608889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.618704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.618819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.618844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.618859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.618871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.034 [2024-04-18 13:50:04.618899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.628692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.628813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.628839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.628853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.628865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.034 [2024-04-18 13:50:04.628893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.034 qpair failed and we were unable to recover it. 
00:21:02.034 [2024-04-18 13:50:04.638747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.034 [2024-04-18 13:50:04.638860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.034 [2024-04-18 13:50:04.638885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.034 [2024-04-18 13:50:04.638900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.034 [2024-04-18 13:50:04.638913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.638941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.648764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.648876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.648906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.648922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.648934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.648963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.658777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.658946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.658971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.658986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.659002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.659031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.668866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.669001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.669027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.669041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.669054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.669095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.678815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.678932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.678958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.678973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.678985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.679015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.688871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.689009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.689034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.689050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.689062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.689106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.698906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.699024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.699049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.699063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.699075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.699103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.708916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.709041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.709067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.709081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.709093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.709122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.718959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.719077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.719102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.719117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.719128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.719171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.728968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.729082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.729108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.729122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.729135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.729188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.739005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.739145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.739197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.739213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.739236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.739266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.749029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.749201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.749228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.749243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.749258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.749288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.759046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.759186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.759212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.759227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.759240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.759270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.769091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.035 [2024-04-18 13:50:04.769245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.035 [2024-04-18 13:50:04.769271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.035 [2024-04-18 13:50:04.769286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.035 [2024-04-18 13:50:04.769299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.035 [2024-04-18 13:50:04.769341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.035 qpair failed and we were unable to recover it. 
00:21:02.035 [2024-04-18 13:50:04.779149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.779312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.779338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.779353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.779371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.779402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.789133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.789311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.789337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.789352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.789365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.789395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.799215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.799374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.799400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.799424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.799437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.799467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.809241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.809355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.809381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.809396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.809409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.809439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.819257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.819390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.819416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.819431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.819444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.819501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.829251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.829376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.829402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.829417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.829430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.829460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.036 [2024-04-18 13:50:04.839346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.036 [2024-04-18 13:50:04.839490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.036 [2024-04-18 13:50:04.839516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.036 [2024-04-18 13:50:04.839531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.036 [2024-04-18 13:50:04.839543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.036 [2024-04-18 13:50:04.839583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.036 qpair failed and we were unable to recover it. 
00:21:02.294 [2024-04-18 13:50:04.849343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.294 [2024-04-18 13:50:04.849463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.294 [2024-04-18 13:50:04.849502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.294 [2024-04-18 13:50:04.849517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.294 [2024-04-18 13:50:04.849529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.294 [2024-04-18 13:50:04.849558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.294 qpair failed and we were unable to recover it. 
00:21:02.294 [2024-04-18 13:50:04.859406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.294 [2024-04-18 13:50:04.859553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.294 [2024-04-18 13:50:04.859579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.294 [2024-04-18 13:50:04.859594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.294 [2024-04-18 13:50:04.859607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.294 [2024-04-18 13:50:04.859645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.294 qpair failed and we were unable to recover it. 
00:21:02.294 [2024-04-18 13:50:04.869389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.294 [2024-04-18 13:50:04.869522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.294 [2024-04-18 13:50:04.869563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.294 [2024-04-18 13:50:04.869583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.294 [2024-04-18 13:50:04.869596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.294 [2024-04-18 13:50:04.869636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.294 qpair failed and we were unable to recover it. 
00:21:02.294 [2024-04-18 13:50:04.879443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.294 [2024-04-18 13:50:04.879571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.879597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.879612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.879625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.879653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.889486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.889631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.889657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.889672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.889684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.889713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.899533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.899653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.899679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.899693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.899705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.899734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.909479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.909601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.909626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.909641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.909653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.909682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.919564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.919727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.919753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.919768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.919781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.919816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.929560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.929711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.929737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.929752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.929765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.929793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.939588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.939718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.939744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.939758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.939771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.939799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.949655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.949777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.949802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.949816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.949828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.949857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.959685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.959807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.959837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.959852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.959865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.959894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.969688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.969845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.969871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.969886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.969898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.969937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.979740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.979854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.979878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.979893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.979905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.979935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.989733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.989852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.989878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.989892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.989905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.989933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:04.999787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.295 [2024-04-18 13:50:04.999900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.295 [2024-04-18 13:50:04.999923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.295 [2024-04-18 13:50:04.999938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.295 [2024-04-18 13:50:04.999950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.295 [2024-04-18 13:50:04.999984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.295 qpair failed and we were unable to recover it. 
00:21:02.295 [2024-04-18 13:50:05.009789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.009897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.009922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.009937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.009949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.009978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.019855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.019969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.019992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.020007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.020020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.020049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.029853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.029966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.029992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.030007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.030019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.030047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.039879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.040027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.040052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.040067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.040079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.040108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.049922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.050086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.050117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.050132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.050145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.050199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.059966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.060107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.060133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.060147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.060182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.060215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.069988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.070105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.070131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.070145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.070172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.070211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.079960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.080092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.080117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.080132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.080144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.080197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.090028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.090192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.090220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.090235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.090248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.090283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.296 [2024-04-18 13:50:05.100119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.296 [2024-04-18 13:50:05.100264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.296 [2024-04-18 13:50:05.100290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.296 [2024-04-18 13:50:05.100306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.296 [2024-04-18 13:50:05.100319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.296 [2024-04-18 13:50:05.100349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.296 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.110094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.110229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.110256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.110271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.110284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.110314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.120114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.120253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.120280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.120296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.120308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.120338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.130121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.130275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.130302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.130317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.130330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.130361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.140162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.140313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.140345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.140361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.140374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.140404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.150224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.150363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.150388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.150403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.150416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.150445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.160201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.160318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.160345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.160360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.160373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.160402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.170238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.170368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.170392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.170406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.170420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.170450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.180327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.180461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.180502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.180517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.180536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.180568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.190357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.190475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.190516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.190531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.190543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.190572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.200313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.200424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.200451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.200466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.200494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.200523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.210337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.210450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.210490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.210505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.210518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.210548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.220383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.220518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.220544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.220559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.554 [2024-04-18 13:50:05.220571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.554 [2024-04-18 13:50:05.220600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.554 qpair failed and we were unable to recover it. 
00:21:02.554 [2024-04-18 13:50:05.230422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.554 [2024-04-18 13:50:05.230583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.554 [2024-04-18 13:50:05.230609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.554 [2024-04-18 13:50:05.230623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.230636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.230665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.240422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.240534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.240573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.240589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.240602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.240630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.250493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.250643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.250668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.250683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.250695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.250724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.260536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.260663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.260688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.260702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.260715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.260744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.270522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.270658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.270684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.270706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.270720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.270748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.280587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.280714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.280739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.280754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.280767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.280796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.290590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.290706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.290732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.290747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.290759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.290788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.300626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.300740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.300764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.300779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.300791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.300819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.310702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.310811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.310834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.310848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.310861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.310891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.320731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.320842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.320866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.320881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.320893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.320931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.330791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.330914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.330940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.330954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.330967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.330996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.340774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.340910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.340935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.340950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.340963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.340992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.555 [2024-04-18 13:50:05.350770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.555 [2024-04-18 13:50:05.350927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.555 [2024-04-18 13:50:05.350953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.555 [2024-04-18 13:50:05.350967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.555 [2024-04-18 13:50:05.350980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.555 [2024-04-18 13:50:05.351010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.555 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.360802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.360925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.360951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.360971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.360984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.361014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.370837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.370979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.371003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.371018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.371030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.371059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.380877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.381012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.381036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.381051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.381063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.381092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.390875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.390989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.391014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.391028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.391040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.391069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.400924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.401043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.401067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.401081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.401093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.401122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.410906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.411016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.411040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.411054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.411067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.411097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.420976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.421090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.421130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.421146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.421159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.421195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.814 [2024-04-18 13:50:05.431007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.814 [2024-04-18 13:50:05.431148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.814 [2024-04-18 13:50:05.431208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.814 [2024-04-18 13:50:05.431226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.814 [2024-04-18 13:50:05.431239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.814 [2024-04-18 13:50:05.431270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.814 qpair failed and we were unable to recover it. 
00:21:02.815 [2024-04-18 13:50:05.440984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.815 [2024-04-18 13:50:05.441097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.815 [2024-04-18 13:50:05.441121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.815 [2024-04-18 13:50:05.441135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.815 [2024-04-18 13:50:05.441148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.815 [2024-04-18 13:50:05.441199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.815 qpair failed and we were unable to recover it. 
00:21:02.815 [2024-04-18 13:50:05.451058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.815 [2024-04-18 13:50:05.451208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.815 [2024-04-18 13:50:05.451238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.815 [2024-04-18 13:50:05.451254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.815 [2024-04-18 13:50:05.451267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:02.815 [2024-04-18 13:50:05.451297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.815 qpair failed and we were unable to recover it. 
00:21:02.815 [2024-04-18 13:50:05.461093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.461293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.461319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.461334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.461347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.461378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.471109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.471255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.471281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.471297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.471311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.471342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.481204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.481355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.481381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.481396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.481409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.481439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.491210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.491366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.491390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.491406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.491419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.491454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.501246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.501370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.501396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.501412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.501425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.501454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.511238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.511381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.511407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.511422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.511434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.511465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.521282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.521417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.521443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.521458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.521470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.521515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.531304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.531484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.531509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.531523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.531536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.531566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.541347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.541484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.541513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.541528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.541541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.541571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.551346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.551516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.551540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.551555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.551568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.551596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.561375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.815 [2024-04-18 13:50:05.561520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.815 [2024-04-18 13:50:05.561545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.815 [2024-04-18 13:50:05.561559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.815 [2024-04-18 13:50:05.561572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.815 [2024-04-18 13:50:05.561600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.815 qpair failed and we were unable to recover it.
00:21:02.815 [2024-04-18 13:50:05.571431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.816 [2024-04-18 13:50:05.571564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.816 [2024-04-18 13:50:05.571588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.816 [2024-04-18 13:50:05.571602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.816 [2024-04-18 13:50:05.571616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.816 [2024-04-18 13:50:05.571645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.816 qpair failed and we were unable to recover it.
00:21:02.816 [2024-04-18 13:50:05.581502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.816 [2024-04-18 13:50:05.581616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.816 [2024-04-18 13:50:05.581640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.816 [2024-04-18 13:50:05.581655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.816 [2024-04-18 13:50:05.581673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.816 [2024-04-18 13:50:05.581702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.816 qpair failed and we were unable to recover it.
00:21:02.816 [2024-04-18 13:50:05.591514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.816 [2024-04-18 13:50:05.591665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.816 [2024-04-18 13:50:05.591690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.816 [2024-04-18 13:50:05.591704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.816 [2024-04-18 13:50:05.591716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.816 [2024-04-18 13:50:05.591745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.816 qpair failed and we were unable to recover it.
00:21:02.816 [2024-04-18 13:50:05.601546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.816 [2024-04-18 13:50:05.601659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.816 [2024-04-18 13:50:05.601683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.816 [2024-04-18 13:50:05.601698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.816 [2024-04-18 13:50:05.601711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.816 [2024-04-18 13:50:05.601739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.816 qpair failed and we were unable to recover it.
00:21:02.816 [2024-04-18 13:50:05.611520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:02.816 [2024-04-18 13:50:05.611669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:02.816 [2024-04-18 13:50:05.611693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:02.816 [2024-04-18 13:50:05.611708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:02.816 [2024-04-18 13:50:05.611720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:02.816 [2024-04-18 13:50:05.611750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.816 qpair failed and we were unable to recover it.
00:21:03.074 [2024-04-18 13:50:05.621571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.074 [2024-04-18 13:50:05.621765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.074 [2024-04-18 13:50:05.621790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.074 [2024-04-18 13:50:05.621804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.074 [2024-04-18 13:50:05.621817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.074 [2024-04-18 13:50:05.621846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.074 qpair failed and we were unable to recover it.
00:21:03.074 [2024-04-18 13:50:05.631573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.074 [2024-04-18 13:50:05.631701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.074 [2024-04-18 13:50:05.631726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.074 [2024-04-18 13:50:05.631742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.074 [2024-04-18 13:50:05.631754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.074 [2024-04-18 13:50:05.631783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.074 qpair failed and we were unable to recover it.
00:21:03.074 [2024-04-18 13:50:05.641595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.074 [2024-04-18 13:50:05.641715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.074 [2024-04-18 13:50:05.641740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.074 [2024-04-18 13:50:05.641754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.074 [2024-04-18 13:50:05.641768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.641796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.651591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.651704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.651731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.651746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.651758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.651786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.661677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.661835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.661861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.661876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.661888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.661917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.671732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.671864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.671890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.671911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.671924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.671954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.681717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.681827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.681852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.681867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.681880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.681909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.691715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.691849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.691874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.691889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.691906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.691935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.701839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.701994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.702018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.702033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.702045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.702074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.711848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.711963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.711988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.712002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.712014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.712042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.721778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.721910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.721936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.721951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.721963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.721992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.731852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.731970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.731995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.732011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.732023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.732052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.741862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.741982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.742008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.742023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.742036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.742065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.751867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.751984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.752009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.752024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.752036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.752065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.761891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.762002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.762027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.762047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.762060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.075 [2024-04-18 13:50:05.762089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.075 qpair failed and we were unable to recover it.
00:21:03.075 [2024-04-18 13:50:05.771952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.075 [2024-04-18 13:50:05.772063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.075 [2024-04-18 13:50:05.772087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.075 [2024-04-18 13:50:05.772102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.075 [2024-04-18 13:50:05.772114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.076 [2024-04-18 13:50:05.772143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.076 qpair failed and we were unable to recover it.
00:21:03.076 [2024-04-18 13:50:05.781985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.076 [2024-04-18 13:50:05.782099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.076 [2024-04-18 13:50:05.782124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.076 [2024-04-18 13:50:05.782139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.076 [2024-04-18 13:50:05.782151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.076 [2024-04-18 13:50:05.782205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.076 qpair failed and we were unable to recover it.
00:21:03.076 [2024-04-18 13:50:05.792044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.076 [2024-04-18 13:50:05.792162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.076 [2024-04-18 13:50:05.792212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.076 [2024-04-18 13:50:05.792228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.076 [2024-04-18 13:50:05.792241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.076 [2024-04-18 13:50:05.792272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.076 qpair failed and we were unable to recover it.
00:21:03.076 [2024-04-18 13:50:05.802041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.076 [2024-04-18 13:50:05.802152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.076 [2024-04-18 13:50:05.802200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.076 [2024-04-18 13:50:05.802217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.076 [2024-04-18 13:50:05.802230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.076 [2024-04-18 13:50:05.802259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.076 qpair failed and we were unable to recover it.
00:21:03.076 [2024-04-18 13:50:05.812073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.076 [2024-04-18 13:50:05.812217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.076 [2024-04-18 13:50:05.812243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.076 [2024-04-18 13:50:05.812258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.076 [2024-04-18 13:50:05.812270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.076 [2024-04-18 13:50:05.812300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.076 qpair failed and we were unable to recover it.
00:21:03.076 [2024-04-18 13:50:05.822116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.822257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.822282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.822297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.822310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.822339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.076 [2024-04-18 13:50:05.832212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.832362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.832388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.832404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.832417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.832472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.076 [2024-04-18 13:50:05.842219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.842381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.842406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.842421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.842434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.842478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.076 [2024-04-18 13:50:05.852220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.852348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.852380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.852396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.852409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.852439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.076 [2024-04-18 13:50:05.862263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.862386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.862411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.862426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.862439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.862483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.076 [2024-04-18 13:50:05.872260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.076 [2024-04-18 13:50:05.872380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.076 [2024-04-18 13:50:05.872406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.076 [2024-04-18 13:50:05.872421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.076 [2024-04-18 13:50:05.872434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.076 [2024-04-18 13:50:05.872480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.076 qpair failed and we were unable to recover it. 
00:21:03.334 [2024-04-18 13:50:05.882292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.334 [2024-04-18 13:50:05.882419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.882444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.882458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.882486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.882515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.892345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.892481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.892508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.892523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.892541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.892576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.902410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.902550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.902574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.902588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.902616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.902646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.912395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.912525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.912549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.912563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.912576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.912604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.922412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.922583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.922608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.922623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.922635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.922666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.932446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.932563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.932588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.932604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.932617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.932647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.942597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.942726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.942758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.942774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.942786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.942816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.952508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.952663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.952706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.952722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.952734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.952768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.962611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.962737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.962763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.962778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.962790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.962820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.972548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.972673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.972699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.972713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.972727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.972756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.982596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.982712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.982735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.982750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.982767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.982797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:05.992638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:05.992752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:05.992778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:05.992792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:05.992804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:05.992843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:06.002669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.335 [2024-04-18 13:50:06.002811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.335 [2024-04-18 13:50:06.002836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.335 [2024-04-18 13:50:06.002850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.335 [2024-04-18 13:50:06.002862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.335 [2024-04-18 13:50:06.002892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.335 qpair failed and we were unable to recover it. 
00:21:03.335 [2024-04-18 13:50:06.012725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.012888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.012913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.012928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.012941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.012970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.022745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.022901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.022927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.022942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.022954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.022982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.032707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.032823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.032848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.032863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.032875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.032904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.042760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.042871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.042897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.042912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.042924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.042953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.052768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.052915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.052942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.052957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.052969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.052999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.062812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.062928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.062954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.062968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.062980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.063009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.072830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.072945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.072971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.072985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.073002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.073032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.082893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.083012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.083038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.083052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.083064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.083092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.092891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.093044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.093069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.093084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.093096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.093124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.102954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.103086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.103112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.103127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.103140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.103169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.112943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.113074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.113100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.113114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.113127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.113172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.122964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.123071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.123096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.123111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.123123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.123151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.336 [2024-04-18 13:50:06.133008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.336 [2024-04-18 13:50:06.133115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.336 [2024-04-18 13:50:06.133140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.336 [2024-04-18 13:50:06.133154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.336 [2024-04-18 13:50:06.133166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.336 [2024-04-18 13:50:06.133219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.336 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.143071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.143200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.143226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.595 [2024-04-18 13:50:06.143241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.595 [2024-04-18 13:50:06.143254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.595 [2024-04-18 13:50:06.143284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.595 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.153119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.153285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.153312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.595 [2024-04-18 13:50:06.153327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.595 [2024-04-18 13:50:06.153339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.595 [2024-04-18 13:50:06.153370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.595 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.163089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.163237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.163264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.595 [2024-04-18 13:50:06.163284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.595 [2024-04-18 13:50:06.163297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.595 [2024-04-18 13:50:06.163328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.595 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.173139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.173275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.173301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.595 [2024-04-18 13:50:06.173316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.595 [2024-04-18 13:50:06.173329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.595 [2024-04-18 13:50:06.173360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.595 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.183138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.183281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.183308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.595 [2024-04-18 13:50:06.183324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.595 [2024-04-18 13:50:06.183336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.595 [2024-04-18 13:50:06.183366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.595 qpair failed and we were unable to recover it. 
00:21:03.595 [2024-04-18 13:50:06.193264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.595 [2024-04-18 13:50:06.193401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.595 [2024-04-18 13:50:06.193429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.193444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.193457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.193486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.203268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.203391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.203417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.203433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.203445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.203489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.213324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.213442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.213468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.213483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.213511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.213541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.223309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.223495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.223521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.223536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.223548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.223577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.233295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.233456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.233497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.233511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.233524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.233553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.243309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.243427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.243453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.243469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.243496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.243526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.253365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.253485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.253530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.253546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.253558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.253587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.263388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.263520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.263545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.263560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.263572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.263601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.273409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.273531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.273557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.273572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.273585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.273615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.283478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.283646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.283671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.283686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.283698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.283727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.293535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.293650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.293676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.293690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.293702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.293737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.303486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.303620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.303645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.303660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.303672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.303701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.313533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.313647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.313672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.313687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.596 [2024-04-18 13:50:06.313699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.596 [2024-04-18 13:50:06.313728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.596 qpair failed and we were unable to recover it. 
00:21:03.596 [2024-04-18 13:50:06.323539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.596 [2024-04-18 13:50:06.323651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.596 [2024-04-18 13:50:06.323676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.596 [2024-04-18 13:50:06.323691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.323703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.323732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.333626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.333767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.333792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.333807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.333819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.333848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.343616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.343733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.343763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.343779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.343791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.343820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.353685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.353833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.353860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.353875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.353887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.353915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.363682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.363797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.363822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.363837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.363849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.363878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.373748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.373898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.373925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.373940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.373952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.373982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.383740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.383902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.383928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.383942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.383954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.383991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.597 [2024-04-18 13:50:06.393748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.597 [2024-04-18 13:50:06.393858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.597 [2024-04-18 13:50:06.393884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.597 [2024-04-18 13:50:06.393899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.597 [2024-04-18 13:50:06.393911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.597 [2024-04-18 13:50:06.393940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.597 qpair failed and we were unable to recover it. 
00:21:03.856 [2024-04-18 13:50:06.403788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.856 [2024-04-18 13:50:06.403914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.856 [2024-04-18 13:50:06.403940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.856 [2024-04-18 13:50:06.403954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.856 [2024-04-18 13:50:06.403967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.856 [2024-04-18 13:50:06.403996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.856 qpair failed and we were unable to recover it. 
00:21:03.856 [2024-04-18 13:50:06.413819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:03.856 [2024-04-18 13:50:06.413978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:03.856 [2024-04-18 13:50:06.414003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:03.856 [2024-04-18 13:50:06.414018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:03.856 [2024-04-18 13:50:06.414031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:03.856 [2024-04-18 13:50:06.414060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:03.856 qpair failed and we were unable to recover it. 
00:21:03.856 [2024-04-18 13:50:06.423910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.424045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.424071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.424086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.424100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.424130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.433875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.434007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.434034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.434049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.434061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.434091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.443898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.444014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.444041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.444056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.444068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.444097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.453931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.454037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.454062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.454077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.454090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.454119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.464012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.464151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.464200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.464217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.464230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.464261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.473980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.474109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.474134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.474149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.474190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.474223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.484014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.484121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.484146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.484184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.484200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.484230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.494031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.494208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.494235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.494251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.494264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.494293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.504112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.504276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.504302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.504317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.504331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.504361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.856 [2024-04-18 13:50:06.514075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.856 [2024-04-18 13:50:06.514200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.856 [2024-04-18 13:50:06.514228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.856 [2024-04-18 13:50:06.514243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.856 [2024-04-18 13:50:06.514256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.856 [2024-04-18 13:50:06.514287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.856 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.524122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.524255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.524282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.524297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.524310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.524340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.534175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.534323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.534349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.534364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.534378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.534408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.544230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.544366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.544393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.544408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.544421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.544451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.554243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.554368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.554394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.554410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.554422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.554452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.564270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.564386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.564412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.564433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.564447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.564491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.574305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.574418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.574445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.574461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.574474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.574518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.584356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.584489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.584514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.584529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.584542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.584570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.594389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.594551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.594576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.594591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.594603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.594632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.604400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.604530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.604555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.604570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.604582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.604611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.614417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.614549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.614575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.614590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.614602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.614631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.624485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.624605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.624631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.624645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.624658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.624687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.634467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.634591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.634617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.634632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.634644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.634673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.644564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.644716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.857 [2024-04-18 13:50:06.644741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.857 [2024-04-18 13:50:06.644757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.857 [2024-04-18 13:50:06.644769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.857 [2024-04-18 13:50:06.644798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.857 qpair failed and we were unable to recover it.
00:21:03.857 [2024-04-18 13:50:06.654542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.857 [2024-04-18 13:50:06.654656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.858 [2024-04-18 13:50:06.654689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.858 [2024-04-18 13:50:06.654706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.858 [2024-04-18 13:50:06.654718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:03.858 [2024-04-18 13:50:06.654747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:03.858 qpair failed and we were unable to recover it.
00:21:04.117 [2024-04-18 13:50:06.664577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.117 [2024-04-18 13:50:06.664700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.117 [2024-04-18 13:50:06.664725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.117 [2024-04-18 13:50:06.664740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.117 [2024-04-18 13:50:06.664753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.117 [2024-04-18 13:50:06.664781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.117 qpair failed and we were unable to recover it.
00:21:04.117 [2024-04-18 13:50:06.674617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.117 [2024-04-18 13:50:06.674783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.117 [2024-04-18 13:50:06.674810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.117 [2024-04-18 13:50:06.674826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.117 [2024-04-18 13:50:06.674838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.117 [2024-04-18 13:50:06.674868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.117 qpair failed and we were unable to recover it.
00:21:04.117 [2024-04-18 13:50:06.684620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.117 [2024-04-18 13:50:06.684742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.117 [2024-04-18 13:50:06.684768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.117 [2024-04-18 13:50:06.684783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.117 [2024-04-18 13:50:06.684795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.117 [2024-04-18 13:50:06.684825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.117 qpair failed and we were unable to recover it.
00:21:04.117 [2024-04-18 13:50:06.694648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.117 [2024-04-18 13:50:06.694758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.117 [2024-04-18 13:50:06.694783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.117 [2024-04-18 13:50:06.694798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.117 [2024-04-18 13:50:06.694810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.117 [2024-04-18 13:50:06.694845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.117 qpair failed and we were unable to recover it.
00:21:04.117 [2024-04-18 13:50:06.704717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.117 [2024-04-18 13:50:06.704836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.704863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.704878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.704890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.704919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.714743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.714865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.714892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.714907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.714919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.714948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.724771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.724884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.724910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.724925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.724937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.724966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.734744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.734857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.734882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.734898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.734910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.734939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.744804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.744921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.744951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.744966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.744978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.745007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.754838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.754957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.754983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.754998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.755011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.755039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.764833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.764946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.764971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.764986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.764998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.765027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.774850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.118 [2024-04-18 13:50:06.774957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.118 [2024-04-18 13:50:06.774983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.118 [2024-04-18 13:50:06.774998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.118 [2024-04-18 13:50:06.775010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.118 [2024-04-18 13:50:06.775039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.118 qpair failed and we were unable to recover it.
00:21:04.118 [2024-04-18 13:50:06.784973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.118 [2024-04-18 13:50:06.785110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.118 [2024-04-18 13:50:06.785136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.118 [2024-04-18 13:50:06.785152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.118 [2024-04-18 13:50:06.785165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.118 [2024-04-18 13:50:06.785209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.118 qpair failed and we were unable to recover it. 
00:21:04.118 [2024-04-18 13:50:06.794914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.118 [2024-04-18 13:50:06.795036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.118 [2024-04-18 13:50:06.795062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.118 [2024-04-18 13:50:06.795077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.118 [2024-04-18 13:50:06.795089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.118 [2024-04-18 13:50:06.795118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.118 qpair failed and we were unable to recover it. 
00:21:04.118 [2024-04-18 13:50:06.804937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.118 [2024-04-18 13:50:06.805048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.118 [2024-04-18 13:50:06.805074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.118 [2024-04-18 13:50:06.805089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.118 [2024-04-18 13:50:06.805101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.118 [2024-04-18 13:50:06.805130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.118 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.814966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.815074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.815100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.815115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.815128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.815171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.825008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.825133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.825173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.825198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.825211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.825242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.835026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.835151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.835204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.835222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.835235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.835266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.845072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.845209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.845237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.845253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.845266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.845306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.855129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.855298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.855324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.855340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.855352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.855382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.865151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.865316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.865343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.865362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.865375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.865404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.875142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.875291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.875316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.875331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.875349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.875391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.885201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.885317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.885344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.885359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.885373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.885403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.895240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.895353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.895380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.895396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.895410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.895440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.905258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.905385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.905412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.905427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.119 [2024-04-18 13:50:06.905440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.119 [2024-04-18 13:50:06.905482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.119 qpair failed and we were unable to recover it. 
00:21:04.119 [2024-04-18 13:50:06.915288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.119 [2024-04-18 13:50:06.915408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.119 [2024-04-18 13:50:06.915435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.119 [2024-04-18 13:50:06.915451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.120 [2024-04-18 13:50:06.915463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.120 [2024-04-18 13:50:06.915518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.120 qpair failed and we were unable to recover it. 
00:21:04.378 [2024-04-18 13:50:06.925335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.925465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.925489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.925504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.925517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.925546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.935360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.935480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.935506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.935521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.935534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.935565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.945427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.945587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.945611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.945625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.945638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.945666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.955403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.955548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.955573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.955588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.955600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.955637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.965431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.965592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.965617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.965636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.965649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.965678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.975464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.975600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.975627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.975642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.975655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.975684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.985543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.985684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.985710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.985727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.985739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.985768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:06.995550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:06.995675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:06.995700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:06.995725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:06.995737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:06.995766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:07.005551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:07.005661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:07.005684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:07.005699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:07.005712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:07.005741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:07.015607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:07.015721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:07.015746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:07.015761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:07.015774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:07.015802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.379 [2024-04-18 13:50:07.025671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.379 [2024-04-18 13:50:07.025788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.379 [2024-04-18 13:50:07.025814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.379 [2024-04-18 13:50:07.025829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.379 [2024-04-18 13:50:07.025841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.379 [2024-04-18 13:50:07.025878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.379 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.035718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.035842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.035868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.035883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.035895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.035931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.045690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.045835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.045860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.045875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.045887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.045916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.055784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.055909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.055933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.055953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.055966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.055995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.065823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.065939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.065966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.065981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.065993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.066029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.075813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.075926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.075951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.075966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.075978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.076018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.085806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.085914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.085939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.085953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.085965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.085994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.095826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.095936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.095960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.095975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.095988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.096017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.105963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.106079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.106103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.106118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.106131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.106174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.115923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.116035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.116060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.116074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.116086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.116115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.125919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.126031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.126055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.126070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.126083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.126111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.135965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.136074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.136099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.136113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.136126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.380 [2024-04-18 13:50:07.136155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.380 qpair failed and we were unable to recover it. 
00:21:04.380 [2024-04-18 13:50:07.145984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.380 [2024-04-18 13:50:07.146103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.380 [2024-04-18 13:50:07.146133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.380 [2024-04-18 13:50:07.146148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.380 [2024-04-18 13:50:07.146183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.381 [2024-04-18 13:50:07.146217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.381 qpair failed and we were unable to recover it. 
00:21:04.381 [2024-04-18 13:50:07.156004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.381 [2024-04-18 13:50:07.156121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.381 [2024-04-18 13:50:07.156144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.381 [2024-04-18 13:50:07.156174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.381 [2024-04-18 13:50:07.156199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.381 [2024-04-18 13:50:07.156231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.381 qpair failed and we were unable to recover it. 
00:21:04.381 [2024-04-18 13:50:07.166025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.381 [2024-04-18 13:50:07.166148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.381 [2024-04-18 13:50:07.166195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.381 [2024-04-18 13:50:07.166212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.381 [2024-04-18 13:50:07.166225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.381 [2024-04-18 13:50:07.166256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.381 qpair failed and we were unable to recover it. 
00:21:04.381 [2024-04-18 13:50:07.176068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.381 [2024-04-18 13:50:07.176292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.381 [2024-04-18 13:50:07.176317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.381 [2024-04-18 13:50:07.176332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.381 [2024-04-18 13:50:07.176345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.381 [2024-04-18 13:50:07.176375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.381 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.186125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.186258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.186284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.186300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.186313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.186348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.196125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.196308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.196336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.196352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.196365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.196395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.206092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.206241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.206267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.206282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.206295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.206325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.216190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.216356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.216382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.216397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.216410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.216441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.226234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.226356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.226381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.226397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.226409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.226440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.236241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.236365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.236394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.236410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.236423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.236454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.246280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.246399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.246424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.246439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.246452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.246497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.256275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.256431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.256456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.256485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.256499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.256527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.266335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.266516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.266540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.266555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.266568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.266597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.276371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.276492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.276516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.640 [2024-04-18 13:50:07.276547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.640 [2024-04-18 13:50:07.276565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.640 [2024-04-18 13:50:07.276594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.640 qpair failed and we were unable to recover it. 
00:21:04.640 [2024-04-18 13:50:07.286386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.640 [2024-04-18 13:50:07.286519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.640 [2024-04-18 13:50:07.286543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.286558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.286570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.286600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.296379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.296514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.296539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.296554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.296566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.296595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.306449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.306596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.306620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.306635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.306647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.306676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.316499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.316613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.316637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.316652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.316665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.316694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.326515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.326640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.326664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.326679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.326691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.326720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.336530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.336654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.336678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.336694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.336706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.336735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.346569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.346746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.346771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.346785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.346798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.346828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.356574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.356691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.356715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.356730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.356742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.356783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.366623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.366781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.366807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.366827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.366840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.366870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.376588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:04.641 [2024-04-18 13:50:07.376696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:04.641 [2024-04-18 13:50:07.376721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:04.641 [2024-04-18 13:50:07.376735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:04.641 [2024-04-18 13:50:07.376748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:04.641 [2024-04-18 13:50:07.376777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:04.641 qpair failed and we were unable to recover it. 
00:21:04.641 [2024-04-18 13:50:07.386645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.641 [2024-04-18 13:50:07.386805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.641 [2024-04-18 13:50:07.386832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.641 [2024-04-18 13:50:07.386847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.641 [2024-04-18 13:50:07.386859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.641 [2024-04-18 13:50:07.386888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.641 qpair failed and we were unable to recover it.
00:21:04.641 [2024-04-18 13:50:07.396668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.641 [2024-04-18 13:50:07.396782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.641 [2024-04-18 13:50:07.396806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.641 [2024-04-18 13:50:07.396821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.641 [2024-04-18 13:50:07.396834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.641 [2024-04-18 13:50:07.396862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.641 qpair failed and we were unable to recover it.
00:21:04.641 [2024-04-18 13:50:07.406707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.641 [2024-04-18 13:50:07.406837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.641 [2024-04-18 13:50:07.406863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.641 [2024-04-18 13:50:07.406878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.641 [2024-04-18 13:50:07.406891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.641 [2024-04-18 13:50:07.406920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.641 qpair failed and we were unable to recover it.
00:21:04.641 [2024-04-18 13:50:07.416699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.642 [2024-04-18 13:50:07.416810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.642 [2024-04-18 13:50:07.416835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.642 [2024-04-18 13:50:07.416850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.642 [2024-04-18 13:50:07.416862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.642 [2024-04-18 13:50:07.416890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.642 qpair failed and we were unable to recover it.
00:21:04.642 [2024-04-18 13:50:07.426788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.642 [2024-04-18 13:50:07.426919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.642 [2024-04-18 13:50:07.426943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.642 [2024-04-18 13:50:07.426958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.642 [2024-04-18 13:50:07.426983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.642 [2024-04-18 13:50:07.427014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.642 qpair failed and we were unable to recover it.
00:21:04.642 [2024-04-18 13:50:07.436755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.642 [2024-04-18 13:50:07.436909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.642 [2024-04-18 13:50:07.436936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.642 [2024-04-18 13:50:07.436951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.642 [2024-04-18 13:50:07.436963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.642 [2024-04-18 13:50:07.436992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.642 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.446820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.447004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.447031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.447046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.447059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.447088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.456831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.456990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.457016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.457037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.457050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.457079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.466879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.467013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.467039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.467054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.467066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.467096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.476868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.476979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.477004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.477018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.477031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.477059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.486911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.487020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.487044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.487059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.487071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.487100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.496927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.497082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.497107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.497122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.497135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.497188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.507006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.507126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.507151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.507188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.507203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.507234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.517083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.517237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.517263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.517277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.517290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.517320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.527092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.527247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.527272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.527287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.527300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.527330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.537073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.537219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.537246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.537261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.537274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.537305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.547204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.547353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.547383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.547400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.547413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.547444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.557170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.901 [2024-04-18 13:50:07.557292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.901 [2024-04-18 13:50:07.557319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.901 [2024-04-18 13:50:07.557335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.901 [2024-04-18 13:50:07.557348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.901 [2024-04-18 13:50:07.557377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.901 qpair failed and we were unable to recover it.
00:21:04.901 [2024-04-18 13:50:07.567201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.567326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.567352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.567367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.567380] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.567410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.577227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.577402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.577430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.577445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.577458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.577489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.587294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.587421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.587448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.587478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.587491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.587526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.597259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.597376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.597404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.597419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.597432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.597476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.607348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.607587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.607614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.607630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.607643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.607672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.617327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.617493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.617520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.617535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.617548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.617577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.627344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.627498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.627523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.627538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.627551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.627580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.637400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.637528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.637559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.637575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.637588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.637616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.647432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.647561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.647586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.647601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.647614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.647643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.657444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.657614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.657640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.657655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.657667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.657696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.667511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.667627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.667651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.667666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.667679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.667708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.677500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.677643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.677668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.677682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.677701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.677731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.687545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.687706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.902 [2024-04-18 13:50:07.687733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.902 [2024-04-18 13:50:07.687749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.902 [2024-04-18 13:50:07.687762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.902 [2024-04-18 13:50:07.687791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.902 qpair failed and we were unable to recover it.
00:21:04.902 [2024-04-18 13:50:07.697547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:04.902 [2024-04-18 13:50:07.697687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:04.903 [2024-04-18 13:50:07.697713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:04.903 [2024-04-18 13:50:07.697728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:04.903 [2024-04-18 13:50:07.697741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:04.903 [2024-04-18 13:50:07.697770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:04.903 qpair failed and we were unable to recover it.
00:21:05.160 [2024-04-18 13:50:07.707617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.160 [2024-04-18 13:50:07.707822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.160 [2024-04-18 13:50:07.707847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.160 [2024-04-18 13:50:07.707862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.160 [2024-04-18 13:50:07.707875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.160 [2024-04-18 13:50:07.707904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.160 qpair failed and we were unable to recover it.
00:21:05.160 [2024-04-18 13:50:07.717617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.160 [2024-04-18 13:50:07.717729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.160 [2024-04-18 13:50:07.717754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.160 [2024-04-18 13:50:07.717768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.160 [2024-04-18 13:50:07.717781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.160 [2024-04-18 13:50:07.717810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.160 qpair failed and we were unable to recover it.
00:21:05.160 [2024-04-18 13:50:07.727621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.160 [2024-04-18 13:50:07.727743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.160 [2024-04-18 13:50:07.727769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.160 [2024-04-18 13:50:07.727783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.160 [2024-04-18 13:50:07.727797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.160 [2024-04-18 13:50:07.727825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.160 qpair failed and we were unable to recover it.
00:21:05.160 [2024-04-18 13:50:07.737660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.160 [2024-04-18 13:50:07.737817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.160 [2024-04-18 13:50:07.737843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.160 [2024-04-18 13:50:07.737858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.160 [2024-04-18 13:50:07.737871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.160 [2024-04-18 13:50:07.737899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.160 qpair failed and we were unable to recover it.
00:21:05.160 [2024-04-18 13:50:07.747750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.747917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.747942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.747956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.747969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.747999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.757751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.757867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.757891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.757905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.757918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.757947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.767800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.767936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.767962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.767977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.768005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.768036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.777778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.777890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.777915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.777930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.777942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.777971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.787834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.787955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.787979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.787994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.788007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.788036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.797832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.797952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.797977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.797991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.798003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.798032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.807876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.807986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.808011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.808025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.808038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.808066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.817888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.817995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.818019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.818034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.818046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.818075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.827958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.828073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.828097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.828112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.828125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.828168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.837947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.838065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.838089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.838103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.838116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.160 [2024-04-18 13:50:07.838144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.160 qpair failed and we were unable to recover it. 
00:21:05.160 [2024-04-18 13:50:07.847988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.160 [2024-04-18 13:50:07.848100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.160 [2024-04-18 13:50:07.848124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.160 [2024-04-18 13:50:07.848139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.160 [2024-04-18 13:50:07.848151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.848204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.857972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.858085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.858109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.858129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.858142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.858198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.868064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.868204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.868230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.868245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.868258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.868289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.878072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.878213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.878239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.878254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.878267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.878298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.888085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.888219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.888243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.888258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.888271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.888302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.898132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.898306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.898331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.898346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.898358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.898390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.908252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.908369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.908396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.908411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.908424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.908460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.918155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.918299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.918326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.918343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.918357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.918387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.928190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.928301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.928326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.928340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.928354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.928383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.938222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.938386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.938413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.938429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.938441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.938486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.948268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.948414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.948445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.948476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.948490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.948519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.161 [2024-04-18 13:50:07.958273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.161 [2024-04-18 13:50:07.958389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.161 [2024-04-18 13:50:07.958416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.161 [2024-04-18 13:50:07.958432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.161 [2024-04-18 13:50:07.958445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.161 [2024-04-18 13:50:07.958489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.161 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:07.968309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:07.968430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:07.968458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:07.968473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:07.968486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:07.968516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:07.978361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:07.978495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:07.978520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:07.978534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:07.978547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:07.978575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:07.988380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:07.988507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:07.988534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:07.988549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:07.988561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:07.988596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:07.998417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:07.998564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:07.998590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:07.998605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:07.998622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:07.998651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:08.008422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:08.008558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:08.008583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:08.008597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:08.008609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:08.008638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:08.018460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.419 [2024-04-18 13:50:08.018637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.419 [2024-04-18 13:50:08.018663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.419 [2024-04-18 13:50:08.018678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.419 [2024-04-18 13:50:08.018700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90 00:21:05.419 [2024-04-18 13:50:08.018730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.419 qpair failed and we were unable to recover it. 
00:21:05.419 [2024-04-18 13:50:08.028565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.419 [2024-04-18 13:50:08.028678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.419 [2024-04-18 13:50:08.028704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.419 [2024-04-18 13:50:08.028719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.419 [2024-04-18 13:50:08.028731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.419 [2024-04-18 13:50:08.028760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.419 qpair failed and we were unable to recover it.
00:21:05.419 [2024-04-18 13:50:08.038554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.419 [2024-04-18 13:50:08.038680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.419 [2024-04-18 13:50:08.038712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.419 [2024-04-18 13:50:08.038728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.419 [2024-04-18 13:50:08.038752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.419 [2024-04-18 13:50:08.038781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.419 qpair failed and we were unable to recover it.
00:21:05.419 [2024-04-18 13:50:08.048571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.419 [2024-04-18 13:50:08.048683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.419 [2024-04-18 13:50:08.048709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.419 [2024-04-18 13:50:08.048724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.048737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.048765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.058587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.058704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.058728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.058742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.058754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.058784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.068686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.068804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.068829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.068844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.068856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.068885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.078637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.078754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.078779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.078794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.078806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.078840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.088698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.088817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.088853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.088868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.088880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.088909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.098720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.098830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.098853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.098867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.098880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.098909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.108761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.108876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.108901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.108916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.108928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.108957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.118754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.118901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.118926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.118941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.118953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.118991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.128868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.129038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.129064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.129087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.129100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.129128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.138814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.138930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.138955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.138969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.138981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.139010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.148853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.148976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.149002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.149017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.149029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.149058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.158863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.158976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.158999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.159013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.159025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.159053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.168906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.169026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.169051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.169066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.169085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.420 [2024-04-18 13:50:08.169115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.420 qpair failed and we were unable to recover it.
00:21:05.420 [2024-04-18 13:50:08.178933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.420 [2024-04-18 13:50:08.179066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.420 [2024-04-18 13:50:08.179091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.420 [2024-04-18 13:50:08.179106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.420 [2024-04-18 13:50:08.179119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.421 [2024-04-18 13:50:08.179159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.421 qpair failed and we were unable to recover it.
00:21:05.421 [2024-04-18 13:50:08.188987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.421 [2024-04-18 13:50:08.189104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.421 [2024-04-18 13:50:08.189130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.421 [2024-04-18 13:50:08.189146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.421 [2024-04-18 13:50:08.189172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.421 [2024-04-18 13:50:08.189219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.421 qpair failed and we were unable to recover it.
00:21:05.421 [2024-04-18 13:50:08.198971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.421 [2024-04-18 13:50:08.199135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.421 [2024-04-18 13:50:08.199193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.421 [2024-04-18 13:50:08.199211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.421 [2024-04-18 13:50:08.199224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.421 [2024-04-18 13:50:08.199265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.421 qpair failed and we were unable to recover it.
00:21:05.421 [2024-04-18 13:50:08.209032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.421 [2024-04-18 13:50:08.209148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.421 [2024-04-18 13:50:08.209196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.421 [2024-04-18 13:50:08.209213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.421 [2024-04-18 13:50:08.209225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6450000b90
00:21:05.421 [2024-04-18 13:50:08.209267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.421 qpair failed and we were unable to recover it.
00:21:05.421 [2024-04-18 13:50:08.219041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.421 [2024-04-18 13:50:08.219182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.421 [2024-04-18 13:50:08.219216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.421 [2024-04-18 13:50:08.219233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.421 [2024-04-18 13:50:08.219247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6458000b90
00:21:05.421 [2024-04-18 13:50:08.219287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.421 qpair failed and we were unable to recover it.
00:21:05.678 [2024-04-18 13:50:08.229209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.229338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.229371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.229387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.229400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.229433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.239151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.239297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.239326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.239342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.239354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.239392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.249224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.249370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.249398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.249413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.249426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.249480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.259183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.259318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.259345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.259365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.259379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.259417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.269280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.269449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.269490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.269505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.269518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.269548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.279233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.279350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.279377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.279392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.279405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.279442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.289250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.289363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.289390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.289406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.289418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.289448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.299341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.299475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.299517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.299532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.299544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.299584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.309406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.309543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.309568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.309583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.309596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.309626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.319420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.319539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.319566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.319582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.319594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.319625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.329380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.329509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.329534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.329549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.329561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.329591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.339402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.339522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.339548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.339563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.339575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.339604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.349461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.349592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.679 [2024-04-18 13:50:08.349622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.679 [2024-04-18 13:50:08.349638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.679 [2024-04-18 13:50:08.349650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.679 [2024-04-18 13:50:08.349680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.679 qpair failed and we were unable to recover it.
00:21:05.679 [2024-04-18 13:50:08.359532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.679 [2024-04-18 13:50:08.359651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.680 [2024-04-18 13:50:08.359677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.680 [2024-04-18 13:50:08.359691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.680 [2024-04-18 13:50:08.359704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.680 [2024-04-18 13:50:08.359745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.680 qpair failed and we were unable to recover it.
00:21:05.680 [2024-04-18 13:50:08.369550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.680 [2024-04-18 13:50:08.369660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.680 [2024-04-18 13:50:08.369686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.680 [2024-04-18 13:50:08.369700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.680 [2024-04-18 13:50:08.369713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.680 [2024-04-18 13:50:08.369742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.680 qpair failed and we were unable to recover it.
00:21:05.680 [2024-04-18 13:50:08.379512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:05.680 [2024-04-18 13:50:08.379618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:05.680 [2024-04-18 13:50:08.379644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:05.680 [2024-04-18 13:50:08.379659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:05.680 [2024-04-18 13:50:08.379671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90
00:21:05.680 [2024-04-18 13:50:08.379701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.680 qpair failed and we were unable to recover it.
00:21:05.680 [2024-04-18 13:50:08.389555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.389672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.389697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.389712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.389725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.389759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.399536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.399647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.399673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.399688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.399700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.399729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.409573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.409688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.409714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.409728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.409740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.409769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.419701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.419825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.419852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.419867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.419879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.419908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.429685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.429818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.429844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.429859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.429871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.429913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.439659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.439769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.439799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.439815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.439827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.439857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.449723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.449831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.449857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.449871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.449883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.449913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.459719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.459824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.459850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.459865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.459877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.459907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.469784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.469940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.469966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.469981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.469993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.470029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.680 [2024-04-18 13:50:08.479802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.680 [2024-04-18 13:50:08.479922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.680 [2024-04-18 13:50:08.479949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.680 [2024-04-18 13:50:08.479963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.680 [2024-04-18 13:50:08.479976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.680 [2024-04-18 13:50:08.480010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.680 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.489858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.489982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.490009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.490024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.490037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.490067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.499812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.499919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.499944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.499959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.499971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.500000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.509888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.510009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.510034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.510048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.510061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.510090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.519896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.520012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.520039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.520054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.520066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.520095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.529882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.529995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.530026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.530042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.530055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.530084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.539930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.540051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.540077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.540091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.540103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.540133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.549964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.550126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.550174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.550205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.550219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.550250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.559991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.938 [2024-04-18 13:50:08.560106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.938 [2024-04-18 13:50:08.560132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.938 [2024-04-18 13:50:08.560146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.938 [2024-04-18 13:50:08.560174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.938 [2024-04-18 13:50:08.560214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.938 qpair failed and we were unable to recover it. 
00:21:05.938 [2024-04-18 13:50:08.569994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.570110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.570135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.570150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.570192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.570226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.580053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.580191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.580219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.580234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.580246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.580277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.590086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.590236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.590262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.590277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.590289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.590320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.600194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.600323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.600349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.600364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.600377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.600408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.610144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.610304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.610332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.610347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.610360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.610390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.620218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.620356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.620383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.620399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.620411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.620442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.630245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.630361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.630387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.630403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.630415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.630446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.640264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.640383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.640410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.640424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.640437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.640482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.650378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.650509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.650534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.650549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.650561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.650590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.660360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.660489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.660514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.660534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.660547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.660577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.670361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.670509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.670534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.670549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.670561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.670591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.680410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.680556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.680583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.680599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.680612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.680645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.690384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.690514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.690539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.690554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.690566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.690595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.700423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.700578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.700604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.700619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.700631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.700670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.710448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.710577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.710602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.710617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.710629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.710658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.720488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.720611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.720637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.720652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.720665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.720705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.730518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.730628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.730654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.730669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.730681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.730711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:05.939 [2024-04-18 13:50:08.740547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:05.939 [2024-04-18 13:50:08.740711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:05.939 [2024-04-18 13:50:08.740737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:05.939 [2024-04-18 13:50:08.740752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:05.939 [2024-04-18 13:50:08.740765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:05.939 [2024-04-18 13:50:08.740794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.939 qpair failed and we were unable to recover it. 
00:21:06.197 [2024-04-18 13:50:08.750594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.750713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.750738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.750758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.750771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.750800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.760629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.760737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.760762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.760777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.760790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.760819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.770629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.770747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.770772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.770786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.770798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.770828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.780679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.780804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.780830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.780845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.780857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.780886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.790705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.790822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.790847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.790862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.790875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.790904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.800735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.800861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.800888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.800902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.800914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.800944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.810779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.810930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.810956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.810971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.810983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.811013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.820721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.820838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.820864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.820878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.820891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.820920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.830841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.830958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.830982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.830997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.831009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.831038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.840804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.840920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.840950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.840966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.840978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.841007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.850861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.850981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.851006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.198 [2024-04-18 13:50:08.851020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.198 [2024-04-18 13:50:08.851033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.198 [2024-04-18 13:50:08.851063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.198 qpair failed and we were unable to recover it. 
00:21:06.198 [2024-04-18 13:50:08.860831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.198 [2024-04-18 13:50:08.860944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.198 [2024-04-18 13:50:08.860969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.860984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.860996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.861026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.870950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.871072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.871097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.871112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.871124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.871153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.880982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.881103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.881128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.881142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.881170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.881215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.890964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.891078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.891102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.891117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.891129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.891173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.900997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.901198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.901226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.901243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.901256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.901288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.911032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.911149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.911174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.911212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.911225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.911256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.921044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.921174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.921206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.921221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.921235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.921265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.931146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.931290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.931323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.931340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.931353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.931385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.941140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.941338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.941365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.941381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.941394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.941425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.951122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.951274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.951301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.951317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.951331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.951361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.961154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.961315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.199 [2024-04-18 13:50:08.961340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.199 [2024-04-18 13:50:08.961355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.199 [2024-04-18 13:50:08.961368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.199 [2024-04-18 13:50:08.961399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.199 qpair failed and we were unable to recover it. 
00:21:06.199 [2024-04-18 13:50:08.971208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.199 [2024-04-18 13:50:08.971328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.200 [2024-04-18 13:50:08.971354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.200 [2024-04-18 13:50:08.971370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.200 [2024-04-18 13:50:08.971387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.200 [2024-04-18 13:50:08.971419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.200 qpair failed and we were unable to recover it. 
00:21:06.200 [2024-04-18 13:50:08.981225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.200 [2024-04-18 13:50:08.981338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.200 [2024-04-18 13:50:08.981365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.200 [2024-04-18 13:50:08.981381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.200 [2024-04-18 13:50:08.981394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.200 [2024-04-18 13:50:08.981424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.200 qpair failed and we were unable to recover it. 
00:21:06.200 [2024-04-18 13:50:08.991296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.200 [2024-04-18 13:50:08.991458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.200 [2024-04-18 13:50:08.991497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.200 [2024-04-18 13:50:08.991513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.200 [2024-04-18 13:50:08.991526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.200 [2024-04-18 13:50:08.991556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.200 qpair failed and we were unable to recover it. 
00:21:06.200 [2024-04-18 13:50:09.001333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.200 [2024-04-18 13:50:09.001495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.200 [2024-04-18 13:50:09.001522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.200 [2024-04-18 13:50:09.001538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.200 [2024-04-18 13:50:09.001551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.200 [2024-04-18 13:50:09.001582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.200 qpair failed and we were unable to recover it. 
00:21:06.458 [2024-04-18 13:50:09.011269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.458 [2024-04-18 13:50:09.011391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.458 [2024-04-18 13:50:09.011416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.458 [2024-04-18 13:50:09.011431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.458 [2024-04-18 13:50:09.011444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.458 [2024-04-18 13:50:09.011487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.458 qpair failed and we were unable to recover it. 
00:21:06.458 [2024-04-18 13:50:09.021308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.458 [2024-04-18 13:50:09.021429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.458 [2024-04-18 13:50:09.021470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.458 [2024-04-18 13:50:09.021486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.458 [2024-04-18 13:50:09.021498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.458 [2024-04-18 13:50:09.021529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.458 qpair failed and we were unable to recover it. 
00:21:06.458 [2024-04-18 13:50:09.031366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.458 [2024-04-18 13:50:09.031531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.458 [2024-04-18 13:50:09.031558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.458 [2024-04-18 13:50:09.031573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.458 [2024-04-18 13:50:09.031586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.458 [2024-04-18 13:50:09.031616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.458 qpair failed and we were unable to recover it. 
00:21:06.458 [2024-04-18 13:50:09.041380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.458 [2024-04-18 13:50:09.041509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.458 [2024-04-18 13:50:09.041534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.458 [2024-04-18 13:50:09.041548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.458 [2024-04-18 13:50:09.041561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.458 [2024-04-18 13:50:09.041596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.458 qpair failed and we were unable to recover it. 
00:21:06.458 [2024-04-18 13:50:09.051432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.458 [2024-04-18 13:50:09.051565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.458 [2024-04-18 13:50:09.051591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.458 [2024-04-18 13:50:09.051606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.051619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.051648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.061463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.061589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.061614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.061634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.061647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.061677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.071525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.071655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.071680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.071694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.071706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.071736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.081569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.081745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.081778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.081793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.081806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.081836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.091535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.091666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.091692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.091707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.091719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.091748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.101534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.101654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.101680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.101695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.101707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.101737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.111569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.111687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.111714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.111729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.111742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.111771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.121605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.121723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.121749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.121764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.121777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.121806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.131647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.459 [2024-04-18 13:50:09.131780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.459 [2024-04-18 13:50:09.131805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.459 [2024-04-18 13:50:09.131820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.459 [2024-04-18 13:50:09.131832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.459 [2024-04-18 13:50:09.131862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.459 qpair failed and we were unable to recover it. 
00:21:06.459 [2024-04-18 13:50:09.141693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.460 [2024-04-18 13:50:09.141826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.460 [2024-04-18 13:50:09.141850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.460 [2024-04-18 13:50:09.141864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.460 [2024-04-18 13:50:09.141877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6448000b90 00:21:06.460 [2024-04-18 13:50:09.141905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.460 qpair failed and we were unable to recover it. 
00:21:06.460 [2024-04-18 13:50:09.151728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.460 [2024-04-18 13:50:09.151866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.460 [2024-04-18 13:50:09.151897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.460 [2024-04-18 13:50:09.151919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.460 [2024-04-18 13:50:09.151932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x185fb30 00:21:06.460 [2024-04-18 13:50:09.151960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.460 qpair failed and we were unable to recover it. 
00:21:06.460 [2024-04-18 13:50:09.161828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.460 [2024-04-18 13:50:09.161943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.460 [2024-04-18 13:50:09.161970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.460 [2024-04-18 13:50:09.161985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.460 [2024-04-18 13:50:09.161997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x185fb30 00:21:06.460 [2024-04-18 13:50:09.162026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.460 qpair failed and we were unable to recover it. 
00:21:06.460 [2024-04-18 13:50:09.171792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.460 [2024-04-18 13:50:09.171948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.460 [2024-04-18 13:50:09.171981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.460 [2024-04-18 13:50:09.171997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.460 [2024-04-18 13:50:09.172011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6458000b90 00:21:06.460 [2024-04-18 13:50:09.172043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.460 qpair failed and we were unable to recover it. 
00:21:06.460 [2024-04-18 13:50:09.181765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:06.460 [2024-04-18 13:50:09.181889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:06.460 [2024-04-18 13:50:09.181915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:06.460 [2024-04-18 13:50:09.181930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:06.460 [2024-04-18 13:50:09.181943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6458000b90 00:21:06.460 [2024-04-18 13:50:09.181974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.460 qpair failed and we were unable to recover it. 00:21:06.460 Controller properly reset. 00:21:06.460 Initializing NVMe Controllers 00:21:06.460 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:06.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:06.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:06.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:06.460 Initialization complete. Launching workers. 
00:21:06.460 Starting thread on core 1 00:21:06.460 Starting thread on core 2 00:21:06.460 Starting thread on core 3 00:21:06.460 Starting thread on core 0 00:21:06.460 13:50:09 -- host/target_disconnect.sh@59 -- # sync 00:21:06.460 00:21:06.460 real 0m11.387s 00:21:06.460 user 0m21.101s 00:21:06.460 sys 0m5.189s 00:21:06.460 13:50:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:06.460 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:21:06.460 ************************************ 00:21:06.460 END TEST nvmf_target_disconnect_tc2 00:21:06.460 ************************************ 00:21:06.460 13:50:09 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:21:06.460 13:50:09 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:06.460 13:50:09 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:21:06.460 13:50:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:06.460 13:50:09 -- nvmf/common.sh@117 -- # sync 00:21:06.460 13:50:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.460 13:50:09 -- nvmf/common.sh@120 -- # set +e 00:21:06.460 13:50:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.460 13:50:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.460 rmmod nvme_tcp 00:21:06.718 rmmod nvme_fabrics 00:21:06.718 rmmod nvme_keyring 00:21:06.718 13:50:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.718 13:50:09 -- nvmf/common.sh@124 -- # set -e 00:21:06.718 13:50:09 -- nvmf/common.sh@125 -- # return 0 00:21:06.718 13:50:09 -- nvmf/common.sh@478 -- # '[' -n 2683820 ']' 00:21:06.718 13:50:09 -- nvmf/common.sh@479 -- # killprocess 2683820 00:21:06.718 13:50:09 -- common/autotest_common.sh@936 -- # '[' -z 2683820 ']' 00:21:06.718 13:50:09 -- common/autotest_common.sh@940 -- # kill -0 2683820 00:21:06.718 13:50:09 -- common/autotest_common.sh@941 -- # uname 00:21:06.718 13:50:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.718 13:50:09 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 2683820 00:21:06.718 13:50:09 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:21:06.718 13:50:09 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:21:06.718 13:50:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2683820' 00:21:06.718 killing process with pid 2683820 00:21:06.718 13:50:09 -- common/autotest_common.sh@955 -- # kill 2683820 00:21:06.718 13:50:09 -- common/autotest_common.sh@960 -- # wait 2683820 00:21:06.976 13:50:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:06.976 13:50:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:06.976 13:50:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:06.976 13:50:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.976 13:50:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.976 13:50:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.976 13:50:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.976 13:50:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.877 13:50:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.877 00:21:08.877 real 0m16.307s 00:21:08.877 user 0m47.158s 00:21:08.877 sys 0m7.194s 00:21:08.877 13:50:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:08.877 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:08.877 ************************************ 00:21:08.877 END TEST nvmf_target_disconnect 00:21:08.877 ************************************ 00:21:09.135 13:50:11 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:21:09.135 13:50:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:09.135 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 13:50:11 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:21:09.135 00:21:09.135 real 15m25.975s 00:21:09.135 user 35m55.048s 00:21:09.135 sys 4m14.375s 00:21:09.135 13:50:11 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:09.135 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 ************************************ 00:21:09.135 END TEST nvmf_tcp 00:21:09.135 ************************************ 00:21:09.135 13:50:11 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:21:09.135 13:50:11 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:09.135 13:50:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:09.135 13:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:09.135 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 ************************************ 00:21:09.135 START TEST spdkcli_nvmf_tcp 00:21:09.135 ************************************ 00:21:09.135 13:50:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:09.135 * Looking for test storage... 
00:21:09.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:21:09.135 13:50:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:09.135 13:50:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.135 13:50:11 -- nvmf/common.sh@7 -- # uname -s 00:21:09.135 13:50:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.135 13:50:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.135 13:50:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.135 13:50:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.135 13:50:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.135 13:50:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.135 13:50:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.135 13:50:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.135 13:50:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.135 13:50:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.135 13:50:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.135 13:50:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.135 13:50:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.135 13:50:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.135 13:50:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.135 13:50:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.135 13:50:11 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.135 13:50:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.135 13:50:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.135 13:50:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.135 13:50:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.135 13:50:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.135 13:50:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.135 13:50:11 -- paths/export.sh@5 -- # export PATH 00:21:09.135 13:50:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.135 13:50:11 -- nvmf/common.sh@47 -- # : 0 00:21:09.135 13:50:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.135 13:50:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.135 13:50:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.135 13:50:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.135 13:50:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.135 13:50:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.135 13:50:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.135 13:50:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:09.135 13:50:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.135 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 13:50:11 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:09.135 13:50:11 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2684978 00:21:09.135 13:50:11 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:09.135 13:50:11 -- spdkcli/common.sh@34 -- # waitforlisten 2684978 00:21:09.135 13:50:11 -- common/autotest_common.sh@817 -- # '[' -z 2684978 ']' 00:21:09.135 13:50:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.135 13:50:11 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.135 13:50:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.135 13:50:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.135 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.393 [2024-04-18 13:50:11.954055] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:21:09.393 [2024-04-18 13:50:11.954150] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684978 ] 00:21:09.393 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.393 [2024-04-18 13:50:12.017700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:09.393 [2024-04-18 13:50:12.130198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.393 [2024-04-18 13:50:12.130207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.651 13:50:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.651 13:50:12 -- common/autotest_common.sh@850 -- # return 0 00:21:09.651 13:50:12 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:09.651 13:50:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:09.651 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:21:09.651 13:50:12 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:09.651 13:50:12 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:09.651 13:50:12 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:09.651 13:50:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.651 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:21:09.651 13:50:12 -- spdkcli/nvmf.sh@65 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:09.651 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:09.652 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:09.652 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:09.652 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:09.652 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:09.652 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:09.652 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:09.652 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:09.652 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:09.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:09.652 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:09.652 ' 00:21:09.909 [2024-04-18 13:50:12.685978] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:12.436 [2024-04-18 13:50:14.853232] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.367 [2024-04-18 13:50:16.093622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:15.898 [2024-04-18 13:50:18.368722] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 
00:21:17.794 [2024-04-18 13:50:20.330957] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:19.198 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:19.198 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:19.198 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.198 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.198 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.198 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:19.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:19.198 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:19.198 13:50:21 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:19.198 13:50:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.198 13:50:21 -- common/autotest_common.sh@10 -- # 
set +x 00:21:19.198 13:50:21 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:19.198 13:50:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.198 13:50:21 -- common/autotest_common.sh@10 -- # set +x 00:21:19.198 13:50:21 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:19.198 13:50:21 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:19.775 13:50:22 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:19.775 13:50:22 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:19.775 13:50:22 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:19.775 13:50:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.775 13:50:22 -- common/autotest_common.sh@10 -- # set +x 00:21:19.775 13:50:22 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:19.775 13:50:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.775 13:50:22 -- common/autotest_common.sh@10 -- # set +x 00:21:19.775 13:50:22 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:19.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:19.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:19.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:19.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:19.775 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:19.775 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:19.775 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:19.775 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:19.775 ' 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:25.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:25.074 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:25.074 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 
00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:25.074 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:25.074 13:50:27 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:25.074 13:50:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:25.074 13:50:27 -- common/autotest_common.sh@10 -- # set +x 00:21:25.074 13:50:27 -- spdkcli/nvmf.sh@90 -- # killprocess 2684978 00:21:25.074 13:50:27 -- common/autotest_common.sh@936 -- # '[' -z 2684978 ']' 00:21:25.074 13:50:27 -- common/autotest_common.sh@940 -- # kill -0 2684978 00:21:25.074 13:50:27 -- common/autotest_common.sh@941 -- # uname 00:21:25.074 13:50:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.074 13:50:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2684978 00:21:25.074 13:50:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:25.074 13:50:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:25.074 13:50:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2684978' 00:21:25.074 killing process with pid 2684978 00:21:25.074 13:50:27 -- common/autotest_common.sh@955 -- # kill 2684978 00:21:25.074 [2024-04-18 13:50:27.764056] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:25.074 13:50:27 -- common/autotest_common.sh@960 -- # wait 2684978 00:21:25.332 13:50:28 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:25.332 13:50:28 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:25.332 13:50:28 -- spdkcli/common.sh@13 -- # '[' -n 2684978 ']' 00:21:25.332 13:50:28 -- spdkcli/common.sh@14 -- # killprocess 2684978 00:21:25.332 13:50:28 -- common/autotest_common.sh@936 -- # '[' -z 2684978 ']' 00:21:25.332 13:50:28 -- 
common/autotest_common.sh@940 -- # kill -0 2684978 00:21:25.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2684978) - No such process 00:21:25.332 13:50:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2684978 is not found' 00:21:25.332 Process with pid 2684978 is not found 00:21:25.332 13:50:28 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:25.332 13:50:28 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:25.332 13:50:28 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:25.332 00:21:25.332 real 0m16.207s 00:21:25.332 user 0m34.217s 00:21:25.332 sys 0m0.841s 00:21:25.332 13:50:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.332 13:50:28 -- common/autotest_common.sh@10 -- # set +x 00:21:25.332 ************************************ 00:21:25.332 END TEST spdkcli_nvmf_tcp 00:21:25.332 ************************************ 00:21:25.332 13:50:28 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:25.332 13:50:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.332 13:50:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.332 13:50:28 -- common/autotest_common.sh@10 -- # set +x 00:21:25.591 ************************************ 00:21:25.591 START TEST nvmf_identify_passthru 00:21:25.591 ************************************ 00:21:25.591 13:50:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:25.591 * Looking for test storage... 
00:21:25.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.591 13:50:28 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.591 13:50:28 -- nvmf/common.sh@7 -- # uname -s 00:21:25.591 13:50:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.591 13:50:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.591 13:50:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.591 13:50:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.591 13:50:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.591 13:50:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.591 13:50:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.591 13:50:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.591 13:50:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.591 13:50:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.591 13:50:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:25.591 13:50:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:25.591 13:50:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.591 13:50:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.591 13:50:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.591 13:50:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.591 13:50:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.591 13:50:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.591 13:50:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.591 13:50:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.591 13:50:28 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.591 13:50:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.591 13:50:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.591 13:50:28 -- paths/export.sh@5 -- # export PATH 00:21:25.591 13:50:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.591 13:50:28 -- nvmf/common.sh@47 -- # : 0 00:21:25.591 13:50:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.591 13:50:28 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:21:25.591 13:50:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.591 13:50:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.591 13:50:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.591 13:50:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.591 13:50:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.591 13:50:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.591 13:50:28 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.591 13:50:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.591 13:50:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.591 13:50:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.591 13:50:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.592 13:50:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.592 13:50:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.592 13:50:28 -- paths/export.sh@5 -- # export PATH 00:21:25.592 13:50:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.592 13:50:28 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:25.592 13:50:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.592 13:50:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.592 13:50:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.592 13:50:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.592 13:50:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.592 13:50:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.592 13:50:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.592 13:50:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.592 13:50:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.592 13:50:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.592 13:50:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.592 13:50:28 -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.492 13:50:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:27.492 13:50:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.492 13:50:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.492 13:50:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.492 13:50:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.492 13:50:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.492 13:50:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.492 13:50:30 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.492 13:50:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.492 13:50:30 -- nvmf/common.sh@296 -- # e810=() 00:21:27.492 13:50:30 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.492 13:50:30 -- nvmf/common.sh@297 -- # x722=() 00:21:27.492 13:50:30 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.492 13:50:30 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.492 13:50:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.492 13:50:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.492 13:50:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.492 13:50:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.492 13:50:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.492 13:50:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:27.492 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:27.492 13:50:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.492 13:50:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:27.492 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:27.492 13:50:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.492 13:50:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:27.492 13:50:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.492 13:50:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:27.492 Found net devices under 0000:84:00.0: cvl_0_0 00:21:27.492 13:50:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.492 13:50:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.492 13:50:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.492 13:50:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.492 13:50:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:27.492 Found net devices under 0000:84:00.1: cvl_0_1 00:21:27.492 13:50:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.492 13:50:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:27.492 13:50:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:27.492 13:50:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:27.492 13:50:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.492 13:50:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.492 13:50:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.492 13:50:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.492 13:50:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.492 13:50:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.492 13:50:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.492 13:50:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.492 13:50:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:27.492 13:50:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.492 13:50:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.492 13:50:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.492 13:50:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.492 13:50:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.492 13:50:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.492 13:50:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.492 13:50:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.492 13:50:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.492 13:50:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.492 13:50:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:21:27.492 00:21:27.492 --- 10.0.0.2 ping statistics --- 00:21:27.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.492 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:27.492 13:50:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:21:27.492 00:21:27.492 --- 10.0.0.1 ping statistics --- 00:21:27.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.493 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:27.493 13:50:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.493 13:50:30 -- nvmf/common.sh@411 -- # return 0 00:21:27.493 13:50:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:27.493 13:50:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.493 13:50:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:27.493 13:50:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:27.493 13:50:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.493 13:50:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:27.493 13:50:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:27.750 13:50:30 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:27.750 13:50:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.750 13:50:30 -- common/autotest_common.sh@10 -- # set +x 00:21:27.750 13:50:30 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:27.750 13:50:30 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:27.750 13:50:30 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:27.750 13:50:30 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:27.750 13:50:30 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:27.750 13:50:30 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:27.750 13:50:30 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:27.750 13:50:30 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:27.750 13:50:30 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:27.751 13:50:30 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:27.751 13:50:30 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:27.751 13:50:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:21:27.751 13:50:30 -- common/autotest_common.sh@1513 -- # echo 0000:82:00.0 00:21:27.751 13:50:30 -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:21:27.751 13:50:30 -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:21:27.751 13:50:30 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:21:27.751 13:50:30 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:27.751 13:50:30 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:27.751 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.927 13:50:34 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:21:31.927 13:50:34 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:21:31.927 13:50:34 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:31.927 13:50:34 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:31.927 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.106 13:50:38 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:36.106 13:50:38 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:36.106 13:50:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.106 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 13:50:38 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:36.106 13:50:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.106 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 13:50:38 -- target/identify_passthru.sh@31 -- # 
nvmfpid=2689617 00:21:36.106 13:50:38 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:36.106 13:50:38 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.106 13:50:38 -- target/identify_passthru.sh@35 -- # waitforlisten 2689617 00:21:36.106 13:50:38 -- common/autotest_common.sh@817 -- # '[' -z 2689617 ']' 00:21:36.106 13:50:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.106 13:50:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.106 13:50:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.106 13:50:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.106 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 [2024-04-18 13:50:38.908337] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:21:36.106 [2024-04-18 13:50:38.908427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.363 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.363 [2024-04-18 13:50:38.976727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.363 [2024-04-18 13:50:39.086290] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.363 [2024-04-18 13:50:39.086353] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.363 [2024-04-18 13:50:39.086366] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.363 [2024-04-18 13:50:39.086378] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.363 [2024-04-18 13:50:39.086388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.363 [2024-04-18 13:50:39.086440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.363 [2024-04-18 13:50:39.088196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.363 [2024-04-18 13:50:39.088272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.363 [2024-04-18 13:50:39.092192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.363 13:50:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.363 13:50:39 -- common/autotest_common.sh@850 -- # return 0 00:21:36.363 13:50:39 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:36.363 13:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.363 13:50:39 -- common/autotest_common.sh@10 -- # set +x 00:21:36.363 INFO: Log level set to 20 00:21:36.363 INFO: Requests: 00:21:36.363 { 00:21:36.363 "jsonrpc": "2.0", 00:21:36.363 "method": "nvmf_set_config", 00:21:36.363 "id": 1, 00:21:36.363 "params": { 00:21:36.363 "admin_cmd_passthru": { 00:21:36.363 "identify_ctrlr": true 00:21:36.364 } 00:21:36.364 } 00:21:36.364 } 00:21:36.364 00:21:36.364 INFO: response: 00:21:36.364 { 00:21:36.364 "jsonrpc": "2.0", 00:21:36.364 "id": 1, 00:21:36.364 "result": true 00:21:36.364 } 00:21:36.364 00:21:36.364 13:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.364 13:50:39 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:36.364 13:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.364 13:50:39 -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.364 INFO: Setting log level to 20 00:21:36.364 INFO: Setting log level to 20 00:21:36.364 INFO: Log level set to 20 00:21:36.364 INFO: Log level set to 20 00:21:36.364 INFO: Requests: 00:21:36.364 { 00:21:36.364 "jsonrpc": "2.0", 00:21:36.364 "method": "framework_start_init", 00:21:36.364 "id": 1 00:21:36.364 } 00:21:36.364 00:21:36.364 INFO: Requests: 00:21:36.364 { 00:21:36.364 "jsonrpc": "2.0", 00:21:36.364 "method": "framework_start_init", 00:21:36.364 "id": 1 00:21:36.364 } 00:21:36.364 00:21:36.621 [2024-04-18 13:50:39.228391] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:36.621 INFO: response: 00:21:36.621 { 00:21:36.621 "jsonrpc": "2.0", 00:21:36.621 "id": 1, 00:21:36.621 "result": true 00:21:36.621 } 00:21:36.621 00:21:36.621 INFO: response: 00:21:36.621 { 00:21:36.621 "jsonrpc": "2.0", 00:21:36.621 "id": 1, 00:21:36.621 "result": true 00:21:36.621 } 00:21:36.621 00:21:36.621 13:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.621 13:50:39 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.621 13:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.621 13:50:39 -- common/autotest_common.sh@10 -- # set +x 00:21:36.621 INFO: Setting log level to 40 00:21:36.621 INFO: Setting log level to 40 00:21:36.621 INFO: Setting log level to 40 00:21:36.621 [2024-04-18 13:50:39.238383] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.621 13:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.621 13:50:39 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:36.621 13:50:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.621 13:50:39 -- common/autotest_common.sh@10 -- # set +x 00:21:36.621 13:50:39 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:21:36.621 13:50:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.621 13:50:39 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 Nvme0n1 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:39.896 13:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.896 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:39.896 13:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.896 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.896 13:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.896 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 [2024-04-18 13:50:42.132807] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:39.896 13:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.896 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 [2024-04-18 13:50:42.140574] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:39.896 [ 00:21:39.896 { 00:21:39.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.896 "subtype": "Discovery", 00:21:39.896 
"listen_addresses": [], 00:21:39.896 "allow_any_host": true, 00:21:39.896 "hosts": [] 00:21:39.896 }, 00:21:39.896 { 00:21:39.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.896 "subtype": "NVMe", 00:21:39.896 "listen_addresses": [ 00:21:39.896 { 00:21:39.896 "transport": "TCP", 00:21:39.896 "trtype": "TCP", 00:21:39.896 "adrfam": "IPv4", 00:21:39.896 "traddr": "10.0.0.2", 00:21:39.896 "trsvcid": "4420" 00:21:39.896 } 00:21:39.896 ], 00:21:39.896 "allow_any_host": true, 00:21:39.896 "hosts": [], 00:21:39.896 "serial_number": "SPDK00000000000001", 00:21:39.896 "model_number": "SPDK bdev Controller", 00:21:39.896 "max_namespaces": 1, 00:21:39.896 "min_cntlid": 1, 00:21:39.896 "max_cntlid": 65519, 00:21:39.896 "namespaces": [ 00:21:39.896 { 00:21:39.896 "nsid": 1, 00:21:39.896 "bdev_name": "Nvme0n1", 00:21:39.896 "name": "Nvme0n1", 00:21:39.896 "nguid": "D351A95A70C743D298185489F54A0EBC", 00:21:39.896 "uuid": "d351a95a-70c7-43d2-9818-5489f54a0ebc" 00:21:39.896 } 00:21:39.896 ] 00:21:39.896 } 00:21:39.896 ] 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:39.896 13:50:42 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:39.896 13:50:42 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:39.896 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.896 13:50:42 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:21:39.896 13:50:42 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:39.896 13:50:42 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:39.896 13:50:42 -- 
target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:39.896 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.896 13:50:42 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:21:39.896 13:50:42 -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:21:39.896 13:50:42 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:21:39.896 13:50:42 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.896 13:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.896 13:50:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.896 13:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.896 13:50:42 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:39.896 13:50:42 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:39.896 13:50:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:39.896 13:50:42 -- nvmf/common.sh@117 -- # sync 00:21:39.896 13:50:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.896 13:50:42 -- nvmf/common.sh@120 -- # set +e 00:21:39.896 13:50:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.896 13:50:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.896 rmmod nvme_tcp 00:21:39.896 rmmod nvme_fabrics 00:21:39.896 rmmod nvme_keyring 00:21:39.896 13:50:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.896 13:50:42 -- nvmf/common.sh@124 -- # set -e 00:21:39.896 13:50:42 -- nvmf/common.sh@125 -- # return 0 00:21:39.896 13:50:42 -- nvmf/common.sh@478 -- # '[' -n 2689617 ']' 00:21:39.896 13:50:42 -- nvmf/common.sh@479 -- # killprocess 2689617 00:21:39.896 13:50:42 -- common/autotest_common.sh@936 -- # '[' -z 2689617 ']' 00:21:39.896 13:50:42 -- common/autotest_common.sh@940 -- # kill -0 2689617 00:21:39.896 13:50:42 -- common/autotest_common.sh@941 -- # uname 00:21:39.896 13:50:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:21:39.896 13:50:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2689617 00:21:39.896 13:50:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:39.896 13:50:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:39.896 13:50:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2689617' 00:21:39.896 killing process with pid 2689617 00:21:39.896 13:50:42 -- common/autotest_common.sh@955 -- # kill 2689617 00:21:39.896 [2024-04-18 13:50:42.470854] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:39.896 13:50:42 -- common/autotest_common.sh@960 -- # wait 2689617 00:21:41.301 13:50:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:41.301 13:50:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:41.301 13:50:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:41.301 13:50:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.301 13:50:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.301 13:50:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.301 13:50:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:41.301 13:50:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.830 13:50:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.830 00:21:43.830 real 0m17.925s 00:21:43.830 user 0m26.233s 00:21:43.830 sys 0m2.273s 00:21:43.830 13:50:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.830 13:50:46 -- common/autotest_common.sh@10 -- # set +x 00:21:43.830 ************************************ 00:21:43.830 END TEST nvmf_identify_passthru 00:21:43.830 ************************************ 00:21:43.830 13:50:46 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 
00:21:43.830 13:50:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.830 13:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.830 13:50:46 -- common/autotest_common.sh@10 -- # set +x 00:21:43.830 ************************************ 00:21:43.830 START TEST nvmf_dif 00:21:43.830 ************************************ 00:21:43.830 13:50:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:43.830 * Looking for test storage... 00:21:43.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.830 13:50:46 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.830 13:50:46 -- nvmf/common.sh@7 -- # uname -s 00:21:43.830 13:50:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.830 13:50:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.830 13:50:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.830 13:50:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.830 13:50:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.830 13:50:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.830 13:50:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.830 13:50:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.830 13:50:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.830 13:50:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.830 13:50:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.830 13:50:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.830 13:50:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.830 13:50:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.830 13:50:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:43.830 13:50:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.830 13:50:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.830 13:50:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.830 13:50:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.830 13:50:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.830 13:50:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.830 13:50:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.830 13:50:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.830 13:50:46 -- paths/export.sh@5 -- # export PATH 00:21:43.830 13:50:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.830 13:50:46 -- nvmf/common.sh@47 -- # : 0 00:21:43.830 13:50:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.830 13:50:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.830 13:50:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.830 13:50:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.830 13:50:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.830 13:50:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.830 13:50:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.830 13:50:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.830 13:50:46 -- target/dif.sh@15 -- # NULL_META=16 00:21:43.830 13:50:46 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:43.830 13:50:46 -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:43.830 13:50:46 -- target/dif.sh@15 -- # NULL_DIF=1 00:21:43.830 13:50:46 -- target/dif.sh@135 -- # nvmftestinit 00:21:43.830 13:50:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:43.830 13:50:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.830 13:50:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:43.830 13:50:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:43.830 13:50:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:43.830 13:50:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.830 13:50:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:43.830 13:50:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.830 13:50:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 
00:21:43.830 13:50:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:43.830 13:50:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.830 13:50:46 -- common/autotest_common.sh@10 -- # set +x 00:21:45.729 13:50:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.729 13:50:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.729 13:50:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.729 13:50:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.729 13:50:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.729 13:50:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.729 13:50:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.729 13:50:48 -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.729 13:50:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.729 13:50:48 -- nvmf/common.sh@296 -- # e810=() 00:21:45.729 13:50:48 -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.729 13:50:48 -- nvmf/common.sh@297 -- # x722=() 00:21:45.730 13:50:48 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.730 13:50:48 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.730 13:50:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.730 13:50:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.730 13:50:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.730 13:50:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:45.730 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:45.730 13:50:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.730 13:50:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:45.730 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:45.730 13:50:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.730 13:50:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.730 13:50:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.730 13:50:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:45.730 Found net devices under 0000:84:00.0: cvl_0_0 00:21:45.730 13:50:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.730 13:50:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.730 13:50:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.730 13:50:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:45.730 Found net devices under 0000:84:00.1: cvl_0_1 00:21:45.730 13:50:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:45.730 13:50:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:45.730 13:50:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:45.730 13:50:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.730 13:50:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.730 13:50:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.730 13:50:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.730 13:50:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.730 13:50:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:21:45.730 13:50:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.730 13:50:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.730 13:50:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.730 13:50:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.730 13:50:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.730 13:50:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.730 13:50:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.730 13:50:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.730 13:50:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.730 13:50:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.730 13:50:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.730 13:50:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.730 13:50:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:21:45.730 00:21:45.730 --- 10.0.0.2 ping statistics --- 00:21:45.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.730 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:21:45.730 13:50:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:45.730 00:21:45.730 --- 10.0.0.1 ping statistics --- 00:21:45.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.730 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:45.730 13:50:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.730 13:50:48 -- nvmf/common.sh@411 -- # return 0 00:21:45.730 13:50:48 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:45.730 13:50:48 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:47.105 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:47.105 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:47.105 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:47.105 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:47.105 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:47.105 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:47.105 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:47.105 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:47.105 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:47.105 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:47.105 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:47.105 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:47.105 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:47.105 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:47.105 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:47.105 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:47.105 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:47.105 13:50:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.105 13:50:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:21:47.105 13:50:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:47.105 13:50:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.105 13:50:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:47.105 13:50:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:47.105 13:50:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:47.105 13:50:49 -- target/dif.sh@137 -- # nvmfappstart 00:21:47.105 13:50:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:47.105 13:50:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:47.105 13:50:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.105 13:50:49 -- nvmf/common.sh@470 -- # nvmfpid=2692916 00:21:47.105 13:50:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:47.105 13:50:49 -- nvmf/common.sh@471 -- # waitforlisten 2692916 00:21:47.105 13:50:49 -- common/autotest_common.sh@817 -- # '[' -z 2692916 ']' 00:21:47.105 13:50:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.105 13:50:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.105 13:50:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.105 13:50:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.105 13:50:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.105 [2024-04-18 13:50:49.759752] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:21:47.105 [2024-04-18 13:50:49.759835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.105 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.105 [2024-04-18 13:50:49.824012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.363 [2024-04-18 13:50:49.929851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.363 [2024-04-18 13:50:49.929899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.363 [2024-04-18 13:50:49.929929] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.363 [2024-04-18 13:50:49.929940] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.363 [2024-04-18 13:50:49.929951] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:47.363 [2024-04-18 13:50:49.929992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:47.363 13:50:50 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:47.363 13:50:50 -- common/autotest_common.sh@850 -- # return 0
00:21:47.363 13:50:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:47.363 13:50:50 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:47.363 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.363 13:50:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:47.363 13:50:50 -- target/dif.sh@139 -- # create_transport
00:21:47.363 13:50:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:21:47.363 13:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.363 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.363 [2024-04-18 13:50:50.078010] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:47.363 13:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.363 13:50:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:21:47.363 13:50:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:47.363 13:50:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:47.363 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.620 ************************************
00:21:47.620 START TEST fio_dif_1_default
00:21:47.620 ************************************
00:21:47.620 13:50:50 -- common/autotest_common.sh@1111 -- # fio_dif_1
00:21:47.620 13:50:50 -- target/dif.sh@86 -- # create_subsystems 0
00:21:47.620 13:50:50 -- target/dif.sh@28 -- # local sub
00:21:47.620 13:50:50 -- target/dif.sh@30 -- # for sub in "$@"
00:21:47.620 13:50:50 -- target/dif.sh@31 -- # create_subsystem 0
00:21:47.620 13:50:50 -- target/dif.sh@18 -- # local sub_id=0
00:21:47.620 13:50:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:21:47.620 13:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.620 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.620 bdev_null0
00:21:47.620 13:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.620 13:50:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:21:47.620 13:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.620 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.620 13:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.620 13:50:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:21:47.620 13:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.620 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.620 13:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.620 13:50:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:47.620 13:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.620 13:50:50 -- common/autotest_common.sh@10 -- # set +x
00:21:47.620 [2024-04-18 13:50:50.230584] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:47.620 13:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.620 13:50:50 -- target/dif.sh@87 -- # fio /dev/fd/62
00:21:47.620 13:50:50 -- target/dif.sh@87 -- # create_json_sub_conf 0
00:21:47.620 13:50:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:21:47.620 13:50:50 -- nvmf/common.sh@521 -- # config=()
00:21:47.620 13:50:50 -- nvmf/common.sh@521 -- # local subsystem config
00:21:47.620 13:50:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:21:47.620 13:50:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:21:47.620 {
00:21:47.620 "params": {
00:21:47.620 "name": "Nvme$subsystem",
00:21:47.620 "trtype": "$TEST_TRANSPORT",
00:21:47.620 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:47.620 "adrfam": "ipv4",
00:21:47.620 "trsvcid": "$NVMF_PORT",
00:21:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:47.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:47.620 "hdgst": ${hdgst:-false},
00:21:47.620 "ddgst": ${ddgst:-false}
00:21:47.620 },
00:21:47.620 "method": "bdev_nvme_attach_controller"
00:21:47.620 }
00:21:47.620 EOF
00:21:47.620 )")
00:21:47.620 13:50:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:21:47.620 13:50:50 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:21:47.620 13:50:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:21:47.620 13:50:50 -- target/dif.sh@82 -- # gen_fio_conf
00:21:47.620 13:50:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:47.620 13:50:50 -- target/dif.sh@54 -- # local file
00:21:47.620 13:50:50 -- common/autotest_common.sh@1325 -- # local sanitizers
00:21:47.620 13:50:50 -- target/dif.sh@56 -- # cat
00:21:47.620 13:50:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:21:47.620 13:50:50 -- common/autotest_common.sh@1327 -- # shift
00:21:47.620 13:50:50 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:21:47.620 13:50:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:47.620 13:50:50 -- nvmf/common.sh@543 -- # cat
00:21:47.620 13:50:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:21:47.620 13:50:50 -- common/autotest_common.sh@1331 -- # grep libasan
00:21:47.620 13:50:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:47.620 13:50:50 -- target/dif.sh@72 -- # (( file = 1 ))
00:21:47.620 13:50:50 -- target/dif.sh@72 -- # (( file <= files ))
00:21:47.620 13:50:50 -- nvmf/common.sh@545 -- # jq .
00:21:47.620 13:50:50 -- nvmf/common.sh@546 -- # IFS=,
00:21:47.620 13:50:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:21:47.620 "params": {
00:21:47.620 "name": "Nvme0",
00:21:47.620 "trtype": "tcp",
00:21:47.620 "traddr": "10.0.0.2",
00:21:47.620 "adrfam": "ipv4",
00:21:47.620 "trsvcid": "4420",
00:21:47.620 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:21:47.620 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:21:47.620 "hdgst": false,
00:21:47.620 "ddgst": false
00:21:47.620 },
00:21:47.621 "method": "bdev_nvme_attach_controller"
00:21:47.621 }'
00:21:47.621 13:50:50 -- common/autotest_common.sh@1331 -- # asan_lib=
00:21:47.621 13:50:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:21:47.621 13:50:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:47.621 13:50:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:21:47.621 13:50:50 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:21:47.621 13:50:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:47.621 13:50:50 -- common/autotest_common.sh@1331 -- # asan_lib=
00:21:47.621 13:50:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:21:47.621 13:50:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:21:47.621 13:50:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:21:47.877 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:21:47.878 fio-3.35
00:21:47.878 Starting 1 thread
00:21:47.878 EAL: No free 2048 kB hugepages reported on node 1
00:22:00.066
00:22:00.066 filename0: (groupid=0, jobs=1): err= 0: pid=2693150: Thu Apr 18 13:51:01 2024
00:22:00.066 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10021msec)
00:22:00.066 slat (usec): min=4, max=317, avg= 9.58, stdev= 8.32
00:22:00.066 clat (usec): min=599, max=44604, avg=21201.20, stdev=20309.48
00:22:00.066 lat (usec): min=609, max=44635, avg=21210.78, stdev=20309.40
00:22:00.066 clat percentiles (usec):
00:22:00.066 | 1.00th=[ 627], 5.00th=[ 668], 10.00th=[ 701], 20.00th=[ 750],
00:22:00.066 | 30.00th=[ 816], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157],
00:22:00.066 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206],
00:22:00.066 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:22:00.066 | 99.99th=[44827]
00:22:00.066 bw ( KiB/s): min= 672, max= 768, per=99.92%, avg=753.60, stdev=30.22, samples=20
00:22:00.066 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20
00:22:00.066 lat (usec) : 750=19.86%, 1000=23.99%
00:22:00.066 lat (msec) : 2=5.93%, 50=50.21%
00:22:00.066 cpu : usr=89.11%, sys=10.59%, ctx=13, majf=0, minf=239
00:22:00.066 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:22:00.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:00.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:00.066 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:00.066 latency : target=0, window=0, percentile=100.00%, depth=4
00:22:00.066
00:22:00.066 Run status group 0 (all jobs):
00:22:00.066 READ: bw=754KiB/s (772kB/s), 754KiB/s-754KiB/s (772kB/s-772kB/s), io=7552KiB (7733kB), run=10021-10021msec
00:22:00.066 13:51:01 -- target/dif.sh@88 -- # destroy_subsystems 0
00:22:00.066 13:51:01 -- target/dif.sh@43 -- # local sub
00:22:00.066 13:51:01 -- target/dif.sh@45 -- # for sub in "$@"
00:22:00.066 13:51:01 -- target/dif.sh@46 -- # destroy_subsystem 0
00:22:00.066 13:51:01 -- target/dif.sh@36 -- # local sub_id=0
00:22:00.066 13:51:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066
00:22:00.066 real 0m11.223s
00:22:00.066 user 0m10.117s
00:22:00.066 sys 0m1.378s
00:22:00.066 13:51:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 ************************************
00:22:00.066 END TEST fio_dif_1_default
00:22:00.066 ************************************
00:22:00.066 13:51:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:22:00.066 13:51:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:22:00.066 13:51:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 ************************************
00:22:00.066 START TEST fio_dif_1_multi_subsystems
00:22:00.066 ************************************
00:22:00.066 13:51:01 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems
00:22:00.066 13:51:01 -- target/dif.sh@92 -- # local files=1
00:22:00.066 13:51:01 -- target/dif.sh@94 -- # create_subsystems 0 1
00:22:00.066 13:51:01 -- target/dif.sh@28 -- # local sub
00:22:00.066 13:51:01 -- target/dif.sh@30 -- # for sub in "$@"
00:22:00.066 13:51:01 -- target/dif.sh@31 -- # create_subsystem 0
00:22:00.066 13:51:01 -- target/dif.sh@18 -- # local sub_id=0
00:22:00.066 13:51:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 bdev_null0
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 [2024-04-18 13:51:01.557989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@30 -- # for sub in "$@"
00:22:00.066 13:51:01 -- target/dif.sh@31 -- # create_subsystem 1
00:22:00.066 13:51:01 -- target/dif.sh@18 -- # local sub_id=1
00:22:00.066 13:51:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 bdev_null1
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:00.066 13:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.066 13:51:01 -- common/autotest_common.sh@10 -- # set +x
00:22:00.066 13:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.066 13:51:01 -- target/dif.sh@95 -- # fio /dev/fd/62
00:22:00.066 13:51:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:22:00.066 13:51:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:22:00.066 13:51:01 -- nvmf/common.sh@521 -- # config=()
00:22:00.066 13:51:01 -- nvmf/common.sh@521 -- # local subsystem config
00:22:00.066 13:51:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:22:00.066 13:51:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:00.066 13:51:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:22:00.066 {
00:22:00.067 "params": {
00:22:00.067 "name": "Nvme$subsystem",
00:22:00.067 "trtype": "$TEST_TRANSPORT",
00:22:00.067 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:00.067 "adrfam": "ipv4",
00:22:00.067 "trsvcid": "$NVMF_PORT",
00:22:00.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:00.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:00.067 "hdgst": ${hdgst:-false},
00:22:00.067 "ddgst": ${ddgst:-false}
00:22:00.067 },
00:22:00.067 "method": "bdev_nvme_attach_controller"
00:22:00.067 }
00:22:00.067 EOF
00:22:00.067 )")
00:22:00.067 13:51:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:00.067 13:51:01 -- target/dif.sh@82 -- # gen_fio_conf
00:22:00.067 13:51:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:22:00.067 13:51:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:00.067 13:51:01 -- target/dif.sh@54 -- # local file
00:22:00.067 13:51:01 -- common/autotest_common.sh@1325 -- # local sanitizers
00:22:00.067 13:51:01 -- target/dif.sh@56 -- # cat
00:22:00.067 13:51:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:00.067 13:51:01 -- common/autotest_common.sh@1327 -- # shift
00:22:00.067 13:51:01 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:22:00.067 13:51:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.067 13:51:01 -- nvmf/common.sh@543 -- # cat
00:22:00.067 13:51:01 -- target/dif.sh@72 -- # (( file = 1 ))
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:00.067 13:51:01 -- target/dif.sh@72 -- # (( file <= files ))
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # grep libasan
00:22:00.067 13:51:01 -- target/dif.sh@73 -- # cat
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:00.067 13:51:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:22:00.067 13:51:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:22:00.067 {
00:22:00.067 "params": {
00:22:00.067 "name": "Nvme$subsystem",
00:22:00.067 "trtype": "$TEST_TRANSPORT",
00:22:00.067 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:00.067 "adrfam": "ipv4",
00:22:00.067 "trsvcid": "$NVMF_PORT",
00:22:00.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:00.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:00.067 "hdgst": ${hdgst:-false},
00:22:00.067 "ddgst": ${ddgst:-false}
00:22:00.067 },
00:22:00.067 "method": "bdev_nvme_attach_controller"
00:22:00.067 }
00:22:00.067 EOF
00:22:00.067 )")
00:22:00.067 13:51:01 -- nvmf/common.sh@543 -- # cat
00:22:00.067 13:51:01 -- target/dif.sh@72 -- # (( file++ ))
00:22:00.067 13:51:01 -- target/dif.sh@72 -- # (( file <= files ))
00:22:00.067 13:51:01 -- nvmf/common.sh@545 -- # jq .
00:22:00.067 13:51:01 -- nvmf/common.sh@546 -- # IFS=,
00:22:00.067 13:51:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:22:00.067 "params": {
00:22:00.067 "name": "Nvme0",
00:22:00.067 "trtype": "tcp",
00:22:00.067 "traddr": "10.0.0.2",
00:22:00.067 "adrfam": "ipv4",
00:22:00.067 "trsvcid": "4420",
00:22:00.067 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:22:00.067 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:22:00.067 "hdgst": false,
00:22:00.067 "ddgst": false
00:22:00.067 },
00:22:00.067 "method": "bdev_nvme_attach_controller"
00:22:00.067 },{
00:22:00.067 "params": {
00:22:00.067 "name": "Nvme1",
00:22:00.067 "trtype": "tcp",
00:22:00.067 "traddr": "10.0.0.2",
00:22:00.067 "adrfam": "ipv4",
00:22:00.067 "trsvcid": "4420",
00:22:00.067 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:00.067 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:00.067 "hdgst": false,
00:22:00.067 "ddgst": false
00:22:00.067 },
00:22:00.067 "method": "bdev_nvme_attach_controller"
00:22:00.067 }'
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:00.067 13:51:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:00.067 13:51:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:00.067 13:51:01 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:00.067 13:51:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:00.067 13:51:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:22:00.067 13:51:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:00.067 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:22:00.067 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:22:00.067 fio-3.35
00:22:00.067 Starting 2 threads
00:22:00.067 EAL: No free 2048 kB hugepages reported on node 1
00:22:10.032
00:22:10.032 filename0: (groupid=0, jobs=1): err= 0: pid=2694671: Thu Apr 18 13:51:12 2024
00:22:10.032 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10021msec)
00:22:10.032 slat (nsec): min=7972, max=29010, avg=9899.39, stdev=2707.13
00:22:10.032 clat (usec): min=40860, max=43055, avg=41719.55, stdev=462.46
00:22:10.032 lat (usec): min=40868, max=43071, avg=41729.45, stdev=462.65
00:22:10.032 clat percentiles (usec):
00:22:10.032 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:22:10.032 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:22:10.032 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:22:10.032 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:22:10.032 | 99.99th=[43254]
00:22:10.032 bw ( KiB/s): min= 352, max= 384, per=33.98%, avg=382.40, stdev= 7.16, samples=20
00:22:10.032 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20
00:22:10.032 lat (msec) : 50=100.00%
00:22:10.032 cpu : usr=94.17%, sys=5.49%, ctx=13, majf=0, minf=168
00:22:10.032 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:22:10.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:10.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:10.032 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:10.032 latency : target=0, window=0, percentile=100.00%, depth=4
00:22:10.032 filename1: (groupid=0, jobs=1): err= 0: pid=2694672: Thu Apr 18 13:51:12 2024
00:22:10.032 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10003msec)
00:22:10.032 slat (nsec): min=6476, max=39802, avg=9786.39, stdev=3358.57
00:22:10.032 clat (usec): min=651, max=42986, avg=21528.52, stdev=20568.33
00:22:10.032 lat (usec): min=659, max=43010, avg=21538.31, stdev=20568.21
00:22:10.032 clat percentiles (usec):
00:22:10.032 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 742], 20.00th=[ 775],
00:22:10.032 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[40633], 60.00th=[41157],
00:22:10.032 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:22:10.032 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:22:10.032 | 99.99th=[42730]
00:22:10.032 bw ( KiB/s): min= 672, max= 768, per=65.83%, avg=740.80, stdev=33.28, samples=20
00:22:10.032 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20
00:22:10.032 lat (usec) : 750=12.34%, 1000=35.29%
00:22:10.032 lat (msec) : 2=1.94%, 50=50.43%
00:22:10.032 cpu : usr=94.29%, sys=5.38%, ctx=33, majf=0, minf=53
00:22:10.032 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:22:10.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:10.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:10.032 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:10.032 latency : target=0, window=0, percentile=100.00%, depth=4
00:22:10.032
00:22:10.032 Run status group 0 (all jobs):
00:22:10.032 READ: bw=1124KiB/s (1151kB/s), 383KiB/s-742KiB/s (392kB/s-760kB/s), io=11.0MiB (11.5MB), run=10003-10021msec
00:22:10.292 13:51:12 -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:22:10.292 13:51:12 -- target/dif.sh@43 -- # local sub
00:22:10.292 13:51:12 -- target/dif.sh@45 -- # for sub in "$@"
00:22:10.292 13:51:12 -- target/dif.sh@46 -- # destroy_subsystem 0
00:22:10.292 13:51:12 -- target/dif.sh@36 -- # local sub_id=0
00:22:10.292 13:51:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:22:10.292 13:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.292 13:51:12 -- common/autotest_common.sh@10 -- # set +x
00:22:10.292 13:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.292 13:51:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:22:10.292 13:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.292 13:51:12 -- common/autotest_common.sh@10 -- # set +x
00:22:10.292 13:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.292 13:51:12 -- target/dif.sh@45 -- # for sub in "$@"
00:22:10.292 13:51:12 -- target/dif.sh@46 -- # destroy_subsystem 1
00:22:10.292 13:51:12 -- target/dif.sh@36 -- # local sub_id=1
00:22:10.292 13:51:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:10.292 13:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.292 13:51:12 -- common/autotest_common.sh@10 -- # set +x
00:22:10.292 13:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.292 13:51:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:22:10.292 13:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.292 13:51:12 -- common/autotest_common.sh@10 -- # set +x
00:22:10.292 13:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.292
00:22:10.292 real 0m11.446s
00:22:10.292 user 0m20.330s
00:22:10.292 sys 0m1.414s
00:22:10.292 13:51:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:22:10.292 13:51:12 -- common/autotest_common.sh@10 -- # set +x
00:22:10.292 ************************************
00:22:10.292 END TEST fio_dif_1_multi_subsystems
00:22:10.292 ************************************
00:22:10.292 13:51:12 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:22:10.292 13:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:22:10.292 13:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:10.292 13:51:13 -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 ************************************
00:22:10.550 START TEST fio_dif_rand_params
00:22:10.550 ************************************
00:22:10.550 13:51:13 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params
00:22:10.550 13:51:13 -- target/dif.sh@100 -- # local NULL_DIF
00:22:10.550 13:51:13 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:22:10.550 13:51:13 -- target/dif.sh@103 -- # NULL_DIF=3
00:22:10.550 13:51:13 -- target/dif.sh@103 -- # bs=128k
00:22:10.550 13:51:13 -- target/dif.sh@103 -- # numjobs=3
00:22:10.550 13:51:13 -- target/dif.sh@103 -- # iodepth=3
00:22:10.550 13:51:13 -- target/dif.sh@103 -- # runtime=5
00:22:10.550 13:51:13 -- target/dif.sh@105 -- # create_subsystems 0
00:22:10.550 13:51:13 -- target/dif.sh@28 -- # local sub
00:22:10.550 13:51:13 -- target/dif.sh@30 -- # for sub in "$@"
00:22:10.550 13:51:13 -- target/dif.sh@31 -- # create_subsystem 0
00:22:10.550 13:51:13 -- target/dif.sh@18 -- # local sub_id=0
00:22:10.550 13:51:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:22:10.550 13:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.550 13:51:13 -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 bdev_null0
00:22:10.550 13:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.550 13:51:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:22:10.550 13:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.550 13:51:13 -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 13:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.550 13:51:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:22:10.550 13:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.550 13:51:13 -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 13:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.550 13:51:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:10.550 13:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:10.550 13:51:13 -- common/autotest_common.sh@10 -- # set +x
00:22:10.550 [2024-04-18 13:51:13.136282] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:10.550 13:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:10.550 13:51:13 -- target/dif.sh@106 -- # fio /dev/fd/62
00:22:10.550 13:51:13 -- target/dif.sh@106 -- # create_json_sub_conf 0
00:22:10.550 13:51:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:22:10.550 13:51:13 -- nvmf/common.sh@521 -- # config=()
00:22:10.550 13:51:13 -- nvmf/common.sh@521 -- # local subsystem config
00:22:10.550 13:51:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:22:10.550 13:51:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:10.551 13:51:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:22:10.551 {
00:22:10.551 "params": {
00:22:10.551 "name": "Nvme$subsystem",
00:22:10.551 "trtype": "$TEST_TRANSPORT",
00:22:10.551 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:10.551 "adrfam": "ipv4",
00:22:10.551 "trsvcid": "$NVMF_PORT",
00:22:10.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:10.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:10.551 "hdgst": ${hdgst:-false},
00:22:10.551 "ddgst": ${ddgst:-false}
00:22:10.551 },
00:22:10.551 "method": "bdev_nvme_attach_controller"
00:22:10.551 }
00:22:10.551 EOF
00:22:10.551 )")
00:22:10.551 13:51:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:10.551 13:51:13 -- target/dif.sh@82 -- # gen_fio_conf
00:22:10.551 13:51:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:22:10.551 13:51:13 -- target/dif.sh@54 -- # local file
00:22:10.551 13:51:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:10.551 13:51:13 -- target/dif.sh@56 -- # cat
00:22:10.551 13:51:13 -- common/autotest_common.sh@1325 -- # local sanitizers
00:22:10.551 13:51:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:10.551 13:51:13 -- common/autotest_common.sh@1327 -- # shift
00:22:10.551 13:51:13 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:22:10.551 13:51:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:10.551 13:51:13 -- nvmf/common.sh@543 -- # cat
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:10.551 13:51:13 -- target/dif.sh@72 -- # (( file = 1 ))
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # grep libasan
00:22:10.551 13:51:13 -- target/dif.sh@72 -- # (( file <= files ))
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:10.551 13:51:13 -- nvmf/common.sh@545 -- # jq .
00:22:10.551 13:51:13 -- nvmf/common.sh@546 -- # IFS=,
00:22:10.551 13:51:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:22:10.551 "params": {
00:22:10.551 "name": "Nvme0",
00:22:10.551 "trtype": "tcp",
00:22:10.551 "traddr": "10.0.0.2",
00:22:10.551 "adrfam": "ipv4",
00:22:10.551 "trsvcid": "4420",
00:22:10.551 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:22:10.551 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:22:10.551 "hdgst": false,
00:22:10.551 "ddgst": false
00:22:10.551 },
00:22:10.551 "method": "bdev_nvme_attach_controller"
00:22:10.551 }'
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:10.551 13:51:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:10.551 13:51:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:10.551 13:51:13 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:10.551 13:51:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:10.551 13:51:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:22:10.551 13:51:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:22:10.809 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:22:10.809 ...
00:22:10.809 fio-3.35 00:22:10.809 Starting 3 threads 00:22:10.809 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.397 00:22:17.397 filename0: (groupid=0, jobs=1): err= 0: pid=2696592: Thu Apr 18 13:51:19 2024 00:22:17.397 read: IOPS=185, BW=23.1MiB/s (24.3MB/s)(117MiB/5046msec) 00:22:17.397 slat (nsec): min=6169, max=36358, avg=13589.01, stdev=3469.61 00:22:17.397 clat (usec): min=4563, max=58759, avg=16143.37, stdev=13652.80 00:22:17.397 lat (usec): min=4575, max=58772, avg=16156.95, stdev=13652.63 00:22:17.397 clat percentiles (usec): 00:22:17.397 | 1.00th=[ 4948], 5.00th=[ 5604], 10.00th=[ 6915], 20.00th=[ 8848], 00:22:17.397 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11731], 60.00th=[13173], 00:22:17.397 | 70.00th=[14222], 80.00th=[15926], 90.00th=[49546], 95.00th=[52167], 00:22:17.397 | 99.00th=[55313], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:22:17.397 | 99.99th=[58983] 00:22:17.397 bw ( KiB/s): min=15360, max=30268, per=31.18%, avg=23839.60, stdev=4995.19, samples=10 00:22:17.397 iops : min= 120, max= 236, avg=186.20, stdev=38.96, samples=10 00:22:17.397 lat (msec) : 10=36.51%, 20=50.96%, 50=3.53%, 100=8.99% 00:22:17.397 cpu : usr=92.11%, sys=7.35%, ctx=28, majf=0, minf=119 00:22:17.397 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.397 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.397 filename0: (groupid=0, jobs=1): err= 0: pid=2696593: Thu Apr 18 13:51:19 2024 00:22:17.397 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5046msec) 00:22:17.397 slat (nsec): min=5662, max=94934, avg=14061.67, stdev=4955.64 00:22:17.397 clat (usec): min=4789, max=91142, avg=14126.30, stdev=12474.03 00:22:17.397 lat (usec): min=4802, max=91170, avg=14140.36, 
stdev=12474.35 00:22:17.397 clat percentiles (usec): 00:22:17.397 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7832], 00:22:17.397 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11731], 00:22:17.398 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16909], 95.00th=[51643], 00:22:17.398 | 99.00th=[55313], 99.50th=[55837], 99.90th=[89654], 99.95th=[90702], 00:22:17.398 | 99.99th=[90702] 00:22:17.398 bw ( KiB/s): min=15616, max=33792, per=35.66%, avg=27264.00, stdev=5668.41, samples=10 00:22:17.398 iops : min= 122, max= 264, avg=213.00, stdev=44.28, samples=10 00:22:17.398 lat (msec) : 10=44.14%, 20=47.24%, 50=2.06%, 100=6.56% 00:22:17.398 cpu : usr=89.44%, sys=8.25%, ctx=303, majf=0, minf=148 00:22:17.398 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.398 issued rwts: total=1067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.398 filename0: (groupid=0, jobs=1): err= 0: pid=2696594: Thu Apr 18 13:51:19 2024 00:22:17.398 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5004msec) 00:22:17.398 slat (nsec): min=4626, max=41800, avg=16002.99, stdev=4928.83 00:22:17.398 clat (usec): min=5219, max=57284, avg=14793.38, stdev=13397.90 00:22:17.398 lat (usec): min=5232, max=57315, avg=14809.38, stdev=13398.23 00:22:17.398 clat percentiles (usec): 00:22:17.398 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 8225], 00:22:17.398 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11600], 00:22:17.398 | 70.00th=[12387], 80.00th=[13435], 90.00th=[48497], 95.00th=[52691], 00:22:17.398 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:22:17.398 | 99.99th=[57410] 00:22:17.398 bw ( KiB/s): min=17920, max=37632, per=33.85%, avg=25881.60, 
stdev=6064.61, samples=10 00:22:17.398 iops : min= 140, max= 294, avg=202.20, stdev=47.38, samples=10 00:22:17.398 lat (msec) : 10=44.82%, 20=44.23%, 50=1.78%, 100=9.18% 00:22:17.398 cpu : usr=91.78%, sys=7.64%, ctx=8, majf=0, minf=76 00:22:17.398 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.398 issued rwts: total=1013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:17.398 00:22:17.398 Run status group 0 (all jobs): 00:22:17.398 READ: bw=74.7MiB/s (78.3MB/s), 23.1MiB/s-26.4MiB/s (24.3MB/s-27.7MB/s), io=377MiB (395MB), run=5004-5046msec 00:22:17.398 13:51:19 -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:17.398 13:51:19 -- target/dif.sh@43 -- # local sub 00:22:17.398 13:51:19 -- target/dif.sh@45 -- # for sub in "$@" 00:22:17.398 13:51:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:17.398 13:51:19 -- target/dif.sh@36 -- # local sub_id=0 00:22:17.398 13:51:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:22:17.398 13:51:19 -- target/dif.sh@109 -- # bs=4k 00:22:17.398 13:51:19 -- target/dif.sh@109 -- # numjobs=8 00:22:17.398 13:51:19 -- target/dif.sh@109 -- # iodepth=16 00:22:17.398 
13:51:19 -- target/dif.sh@109 -- # runtime= 00:22:17.398 13:51:19 -- target/dif.sh@109 -- # files=2 00:22:17.398 13:51:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:17.398 13:51:19 -- target/dif.sh@28 -- # local sub 00:22:17.398 13:51:19 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.398 13:51:19 -- target/dif.sh@31 -- # create_subsystem 0 00:22:17.398 13:51:19 -- target/dif.sh@18 -- # local sub_id=0 00:22:17.398 13:51:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 bdev_null0 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 [2024-04-18 13:51:19.354560] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@30 -- # 
for sub in "$@" 00:22:17.398 13:51:19 -- target/dif.sh@31 -- # create_subsystem 1 00:22:17.398 13:51:19 -- target/dif.sh@18 -- # local sub_id=1 00:22:17.398 13:51:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 bdev_null1 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.398 13:51:19 -- target/dif.sh@31 -- # create_subsystem 2 00:22:17.398 13:51:19 -- target/dif.sh@18 -- # local sub_id=2 00:22:17.398 13:51:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 
00:22:17.398 bdev_null2 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:17.398 13:51:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.398 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.398 13:51:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.398 13:51:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:17.398 13:51:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:17.398 13:51:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:17.398 13:51:19 -- nvmf/common.sh@521 -- # config=() 00:22:17.398 13:51:19 -- nvmf/common.sh@521 -- # local subsystem config 00:22:17.398 13:51:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.398 13:51:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.398 13:51:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.398 { 00:22:17.398 "params": { 00:22:17.398 "name": "Nvme$subsystem", 00:22:17.398 "trtype": "$TEST_TRANSPORT", 00:22:17.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.398 "adrfam": "ipv4", 00:22:17.398 "trsvcid": "$NVMF_PORT", 00:22:17.398 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.398 "hdgst": ${hdgst:-false}, 00:22:17.398 "ddgst": ${ddgst:-false} 00:22:17.398 }, 00:22:17.398 "method": "bdev_nvme_attach_controller" 00:22:17.398 } 00:22:17.398 EOF 00:22:17.398 )") 00:22:17.398 13:51:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.398 13:51:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:17.398 13:51:19 -- target/dif.sh@82 -- # gen_fio_conf 00:22:17.398 13:51:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.398 13:51:19 -- target/dif.sh@54 -- # local file 00:22:17.398 13:51:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:17.398 13:51:19 -- target/dif.sh@56 -- # cat 00:22:17.398 13:51:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.398 13:51:19 -- common/autotest_common.sh@1327 -- # shift 00:22:17.398 13:51:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:17.398 13:51:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.398 13:51:19 -- nvmf/common.sh@543 -- # cat 00:22:17.398 13:51:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.398 13:51:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:17.398 13:51:19 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:17.399 13:51:19 -- target/dif.sh@73 -- # cat 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.399 13:51:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.399 13:51:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:22:17.399 { 00:22:17.399 "params": { 00:22:17.399 "name": "Nvme$subsystem", 00:22:17.399 "trtype": "$TEST_TRANSPORT", 00:22:17.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.399 "adrfam": "ipv4", 00:22:17.399 "trsvcid": "$NVMF_PORT", 00:22:17.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.399 "hdgst": ${hdgst:-false}, 00:22:17.399 "ddgst": ${ddgst:-false} 00:22:17.399 }, 00:22:17.399 "method": "bdev_nvme_attach_controller" 00:22:17.399 } 00:22:17.399 EOF 00:22:17.399 )") 00:22:17.399 13:51:19 -- nvmf/common.sh@543 -- # cat 00:22:17.399 13:51:19 -- target/dif.sh@72 -- # (( file++ )) 00:22:17.399 13:51:19 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.399 13:51:19 -- target/dif.sh@73 -- # cat 00:22:17.399 13:51:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:17.399 13:51:19 -- target/dif.sh@72 -- # (( file++ )) 00:22:17.399 13:51:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:17.399 { 00:22:17.399 "params": { 00:22:17.399 "name": "Nvme$subsystem", 00:22:17.399 "trtype": "$TEST_TRANSPORT", 00:22:17.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.399 "adrfam": "ipv4", 00:22:17.399 "trsvcid": "$NVMF_PORT", 00:22:17.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.399 "hdgst": ${hdgst:-false}, 00:22:17.399 "ddgst": ${ddgst:-false} 00:22:17.399 }, 00:22:17.399 "method": "bdev_nvme_attach_controller" 00:22:17.399 } 00:22:17.399 EOF 00:22:17.399 )") 00:22:17.399 13:51:19 -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.399 13:51:19 -- nvmf/common.sh@543 -- # cat 00:22:17.399 13:51:19 -- nvmf/common.sh@545 -- # jq . 
00:22:17.399 13:51:19 -- nvmf/common.sh@546 -- # IFS=, 00:22:17.399 13:51:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:17.399 "params": { 00:22:17.399 "name": "Nvme0", 00:22:17.399 "trtype": "tcp", 00:22:17.399 "traddr": "10.0.0.2", 00:22:17.399 "adrfam": "ipv4", 00:22:17.399 "trsvcid": "4420", 00:22:17.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.399 "hdgst": false, 00:22:17.399 "ddgst": false 00:22:17.399 }, 00:22:17.399 "method": "bdev_nvme_attach_controller" 00:22:17.399 },{ 00:22:17.399 "params": { 00:22:17.399 "name": "Nvme1", 00:22:17.399 "trtype": "tcp", 00:22:17.399 "traddr": "10.0.0.2", 00:22:17.399 "adrfam": "ipv4", 00:22:17.399 "trsvcid": "4420", 00:22:17.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.399 "hdgst": false, 00:22:17.399 "ddgst": false 00:22:17.399 }, 00:22:17.399 "method": "bdev_nvme_attach_controller" 00:22:17.399 },{ 00:22:17.399 "params": { 00:22:17.399 "name": "Nvme2", 00:22:17.399 "trtype": "tcp", 00:22:17.399 "traddr": "10.0.0.2", 00:22:17.399 "adrfam": "ipv4", 00:22:17.399 "trsvcid": "4420", 00:22:17.399 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.399 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.399 "hdgst": false, 00:22:17.399 "ddgst": false 00:22:17.399 }, 00:22:17.399 "method": "bdev_nvme_attach_controller" 00:22:17.399 }' 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.399 13:51:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.399 13:51:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:17.399 13:51:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.399 13:51:19 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:22:17.399 13:51:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:17.399 13:51:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:17.399 13:51:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.399 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.399 ... 00:22:17.399 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.399 ... 00:22:17.399 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:17.399 ... 00:22:17.399 fio-3.35 00:22:17.399 Starting 24 threads 00:22:17.399 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.609 00:22:29.609 filename0: (groupid=0, jobs=1): err= 0: pid=2697452: Thu Apr 18 13:51:30 2024 00:22:29.609 read: IOPS=85, BW=343KiB/s (352kB/s)(3472KiB/10111msec) 00:22:29.609 slat (nsec): min=5027, max=81028, avg=13629.63, stdev=9081.62 00:22:29.609 clat (msec): min=85, max=339, avg=185.43, stdev=41.28 00:22:29.609 lat (msec): min=85, max=339, avg=185.44, stdev=41.28 00:22:29.609 clat percentiles (msec): 00:22:29.609 | 1.00th=[ 86], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 144], 00:22:29.609 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 190], 60.00th=[ 197], 00:22:29.609 | 70.00th=[ 205], 80.00th=[ 222], 90.00th=[ 236], 95.00th=[ 247], 00:22:29.609 | 99.00th=[ 288], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:22:29.609 | 99.99th=[ 338] 00:22:29.609 bw ( KiB/s): min= 256, max= 512, per=5.56%, avg=340.80, stdev=75.78, samples=20 00:22:29.609 iops : min= 64, max= 128, avg=85.20, stdev=18.94, samples=20 00:22:29.609 lat (msec) : 100=1.84%, 250=94.47%, 500=3.69% 00:22:29.609 cpu : usr=96.82%, sys=2.21%, ctx=41, 
majf=0, minf=77 00:22:29.609 IO depths : 1=1.6%, 2=5.4%, 4=17.4%, 8=64.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697453: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=71, BW=288KiB/s (295kB/s)(2904KiB/10096msec) 00:22:29.610 slat (nsec): min=8103, max=94114, avg=27220.06, stdev=23676.50 00:22:29.610 clat (msec): min=136, max=364, avg=221.69, stdev=35.41 00:22:29.610 lat (msec): min=136, max=364, avg=221.71, stdev=35.42 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 169], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 190], 00:22:29.610 | 30.00th=[ 199], 40.00th=[ 213], 50.00th=[ 220], 60.00th=[ 228], 00:22:29.610 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 271], 95.00th=[ 292], 00:22:29.610 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 363], 00:22:29.610 | 99.99th=[ 363] 00:22:29.610 bw ( KiB/s): min= 128, max= 384, per=4.64%, avg=284.00, stdev=63.97, samples=20 00:22:29.610 iops : min= 32, max= 96, avg=71.00, stdev=15.99, samples=20 00:22:29.610 lat (msec) : 250=80.44%, 500=19.56% 00:22:29.610 cpu : usr=98.26%, sys=1.34%, ctx=23, majf=0, minf=50 00:22:29.610 IO depths : 1=3.0%, 2=8.4%, 4=22.3%, 8=56.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697454: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=57, BW=229KiB/s 
(234kB/s)(2304KiB/10080msec) 00:22:29.610 slat (usec): min=8, max=103, avg=49.24, stdev=27.54 00:22:29.610 clat (msec): min=129, max=467, avg=279.55, stdev=52.41 00:22:29.610 lat (msec): min=129, max=467, avg=279.60, stdev=52.40 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 171], 5.00th=[ 194], 10.00th=[ 205], 20.00th=[ 236], 00:22:29.610 | 30.00th=[ 255], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.610 | 70.00th=[ 300], 80.00th=[ 317], 90.00th=[ 351], 95.00th=[ 380], 00:22:29.610 | 99.00th=[ 414], 99.50th=[ 435], 99.90th=[ 468], 99.95th=[ 468], 00:22:29.610 | 99.99th=[ 468] 00:22:29.610 bw ( KiB/s): min= 128, max= 384, per=3.65%, avg=224.00, stdev=70.42, samples=20 00:22:29.610 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:22:29.610 lat (msec) : 250=26.22%, 500=73.78% 00:22:29.610 cpu : usr=97.99%, sys=1.59%, ctx=16, majf=0, minf=58 00:22:29.610 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697455: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10075msec) 00:22:29.610 slat (nsec): min=8501, max=97509, avg=57718.77, stdev=26141.24 00:22:29.610 clat (msec): min=173, max=480, avg=279.35, stdev=52.34 00:22:29.610 lat (msec): min=173, max=480, avg=279.41, stdev=52.33 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 205], 20.00th=[ 232], 00:22:29.610 | 30.00th=[ 257], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.610 | 70.00th=[ 300], 80.00th=[ 317], 90.00th=[ 347], 95.00th=[ 372], 00:22:29.610 | 99.00th=[ 384], 99.50th=[ 472], 99.90th=[ 
481], 99.95th=[ 481], 00:22:29.610 | 99.99th=[ 481] 00:22:29.610 bw ( KiB/s): min= 127, max= 384, per=3.65%, avg=223.95, stdev=69.14, samples=20 00:22:29.610 iops : min= 31, max= 96, avg=55.95, stdev=17.34, samples=20 00:22:29.610 lat (msec) : 250=28.30%, 500=71.70% 00:22:29.610 cpu : usr=98.19%, sys=1.39%, ctx=16, majf=0, minf=53 00:22:29.610 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697456: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=63, BW=252KiB/s (258kB/s)(2544KiB/10089msec) 00:22:29.610 slat (usec): min=6, max=102, avg=31.07, stdev=17.99 00:22:29.610 clat (msec): min=134, max=413, avg=252.52, stdev=46.19 00:22:29.610 lat (msec): min=134, max=413, avg=252.55, stdev=46.19 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 148], 5.00th=[ 180], 10.00th=[ 199], 20.00th=[ 213], 00:22:29.610 | 30.00th=[ 226], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 268], 00:22:29.610 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 330], 00:22:29.610 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 414], 00:22:29.610 | 99.99th=[ 414] 00:22:29.610 bw ( KiB/s): min= 128, max= 336, per=4.05%, avg=248.00, stdev=53.32, samples=20 00:22:29.610 iops : min= 32, max= 84, avg=62.00, stdev=13.33, samples=20 00:22:29.610 lat (msec) : 250=50.31%, 500=49.69% 00:22:29.610 cpu : usr=98.26%, sys=1.32%, ctx=18, majf=0, minf=42 00:22:29.610 IO depths : 1=2.7%, 2=7.2%, 4=19.8%, 8=60.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.6%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697457: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10078msec) 00:22:29.610 slat (usec): min=21, max=104, avg=69.83, stdev=11.36 00:22:29.610 clat (msec): min=191, max=383, avg=279.31, stdev=46.77 00:22:29.610 lat (msec): min=191, max=383, avg=279.38, stdev=46.77 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 192], 5.00th=[ 197], 10.00th=[ 203], 20.00th=[ 243], 00:22:29.610 | 30.00th=[ 251], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.610 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 342], 95.00th=[ 372], 00:22:29.610 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:22:29.610 | 99.99th=[ 384] 00:22:29.610 bw ( KiB/s): min= 128, max= 384, per=3.65%, avg=224.00, stdev=70.42, samples=20 00:22:29.610 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:22:29.610 lat (msec) : 250=30.38%, 500=69.62% 00:22:29.610 cpu : usr=98.26%, sys=1.33%, ctx=14, majf=0, minf=53 00:22:29.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697458: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10097msec) 00:22:29.610 slat (usec): min=8, max=105, avg=68.79, stdev=16.97 00:22:29.610 clat (msec): min=145, max=363, avg=272.28, stdev=51.56 00:22:29.610 lat (msec): min=145, max=364, avg=272.35, stdev=51.57 00:22:29.610 clat percentiles (msec): 
00:22:29.610 | 1.00th=[ 146], 5.00th=[ 184], 10.00th=[ 199], 20.00th=[ 226], 00:22:29.610 | 30.00th=[ 249], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.610 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 347], 00:22:29.610 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:22:29.610 | 99.99th=[ 363] 00:22:29.610 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=230.40, stdev=65.95, samples=20 00:22:29.610 iops : min= 32, max= 96, avg=57.60, stdev=16.49, samples=20 00:22:29.610 lat (msec) : 250=31.42%, 500=68.58% 00:22:29.610 cpu : usr=98.17%, sys=1.39%, ctx=80, majf=0, minf=46 00:22:29.610 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename0: (groupid=0, jobs=1): err= 0: pid=2697459: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=60, BW=240KiB/s (246kB/s)(2424KiB/10085msec) 00:22:29.610 slat (nsec): min=8406, max=82422, avg=21496.64, stdev=14553.30 00:22:29.610 clat (msec): min=128, max=395, avg=265.96, stdev=52.59 00:22:29.610 lat (msec): min=128, max=395, avg=265.98, stdev=52.59 00:22:29.610 clat percentiles (msec): 00:22:29.610 | 1.00th=[ 144], 5.00th=[ 155], 10.00th=[ 194], 20.00th=[ 224], 00:22:29.610 | 30.00th=[ 243], 40.00th=[ 257], 50.00th=[ 275], 60.00th=[ 292], 00:22:29.610 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 347], 00:22:29.610 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:22:29.610 | 99.99th=[ 397] 00:22:29.610 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=236.00, stdev=75.02, samples=20 00:22:29.610 iops : min= 32, max= 96, avg=59.00, stdev=18.76, samples=20 00:22:29.610 lat (msec) : 250=39.27%, 500=60.73% 
00:22:29.610 cpu : usr=98.19%, sys=1.43%, ctx=25, majf=0, minf=51 00:22:29.610 IO depths : 1=4.6%, 2=10.9%, 4=25.1%, 8=51.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:22:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.610 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.610 filename1: (groupid=0, jobs=1): err= 0: pid=2697460: Thu Apr 18 13:51:30 2024 00:22:29.610 read: IOPS=82, BW=329KiB/s (337kB/s)(3328KiB/10111msec) 00:22:29.610 slat (nsec): min=8113, max=92745, avg=17364.00, stdev=17051.60 00:22:29.611 clat (msec): min=85, max=311, avg=194.19, stdev=31.82 00:22:29.611 lat (msec): min=85, max=311, avg=194.21, stdev=31.82 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 86], 5.00th=[ 150], 10.00th=[ 157], 20.00th=[ 174], 00:22:29.611 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 207], 00:22:29.611 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 243], 00:22:29.611 | 99.00th=[ 253], 99.50th=[ 253], 99.90th=[ 313], 99.95th=[ 313], 00:22:29.611 | 99.99th=[ 313] 00:22:29.611 bw ( KiB/s): min= 256, max= 384, per=5.33%, avg=326.40, stdev=63.87, samples=20 00:22:29.611 iops : min= 64, max= 96, avg=81.60, stdev=15.97, samples=20 00:22:29.611 lat (msec) : 100=1.92%, 250=96.15%, 500=1.92% 00:22:29.611 cpu : usr=97.86%, sys=1.67%, ctx=26, majf=0, minf=47 00:22:29.611 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697461: Thu Apr 18 13:51:30 
2024 00:22:29.611 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10090msec) 00:22:29.611 slat (nsec): min=6931, max=94960, avg=38520.94, stdev=24302.32 00:22:29.611 clat (msec): min=131, max=399, avg=265.17, stdev=45.86 00:22:29.611 lat (msec): min=131, max=399, avg=265.20, stdev=45.85 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 146], 5.00th=[ 192], 10.00th=[ 201], 20.00th=[ 234], 00:22:29.611 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 268], 60.00th=[ 284], 00:22:29.611 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 342], 00:22:29.611 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 401], 99.95th=[ 401], 00:22:29.611 | 99.99th=[ 401] 00:22:29.611 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=236.80, stdev=46.89, samples=20 00:22:29.611 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:22:29.611 lat (msec) : 250=37.83%, 500=62.17% 00:22:29.611 cpu : usr=98.17%, sys=1.43%, ctx=20, majf=0, minf=47 00:22:29.611 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697462: Thu Apr 18 13:51:30 2024 00:22:29.611 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10096msec) 00:22:29.611 slat (nsec): min=14846, max=98842, avg=53140.85, stdev=20438.26 00:22:29.611 clat (msec): min=142, max=466, avg=265.26, stdev=50.02 00:22:29.611 lat (msec): min=143, max=466, avg=265.32, stdev=50.02 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 201], 20.00th=[ 224], 00:22:29.611 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 284], 00:22:29.611 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 330], 
00:22:29.611 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 468], 99.95th=[ 468], 00:22:29.611 | 99.99th=[ 468] 00:22:29.611 bw ( KiB/s): min= 128, max= 368, per=3.86%, avg=236.80, stdev=59.55, samples=20 00:22:29.611 iops : min= 32, max= 92, avg=59.20, stdev=14.89, samples=20 00:22:29.611 lat (msec) : 250=40.13%, 500=59.87% 00:22:29.611 cpu : usr=98.15%, sys=1.37%, ctx=17, majf=0, minf=42 00:22:29.611 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697463: Thu Apr 18 13:51:30 2024 00:22:29.611 read: IOPS=77, BW=310KiB/s (317kB/s)(3136KiB/10117msec) 00:22:29.611 slat (nsec): min=5515, max=91001, avg=24330.84, stdev=21676.70 00:22:29.611 clat (msec): min=91, max=270, avg=206.16, stdev=34.34 00:22:29.611 lat (msec): min=91, max=270, avg=206.18, stdev=34.35 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 91], 5.00th=[ 136], 10.00th=[ 174], 20.00th=[ 180], 00:22:29.611 | 30.00th=[ 192], 40.00th=[ 201], 50.00th=[ 213], 60.00th=[ 220], 00:22:29.611 | 70.00th=[ 226], 80.00th=[ 234], 90.00th=[ 249], 95.00th=[ 257], 00:22:29.611 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:22:29.611 | 99.99th=[ 271] 00:22:29.611 bw ( KiB/s): min= 256, max= 384, per=5.02%, avg=307.20, stdev=58.18, samples=20 00:22:29.611 iops : min= 64, max= 96, avg=76.80, stdev=14.54, samples=20 00:22:29.611 lat (msec) : 100=2.04%, 250=89.29%, 500=8.67% 00:22:29.611 cpu : usr=97.92%, sys=1.46%, ctx=52, majf=0, minf=54 00:22:29.611 IO depths : 1=2.0%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697464: Thu Apr 18 13:51:30 2024 00:22:29.611 read: IOPS=60, BW=243KiB/s (249kB/s)(2456KiB/10089msec) 00:22:29.611 slat (usec): min=8, max=104, avg=33.48, stdev=20.99 00:22:29.611 clat (msec): min=130, max=430, avg=262.06, stdev=50.86 00:22:29.611 lat (msec): min=130, max=430, avg=262.10, stdev=50.87 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 165], 5.00th=[ 171], 10.00th=[ 199], 20.00th=[ 213], 00:22:29.611 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 284], 00:22:29.611 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 342], 00:22:29.611 | 99.00th=[ 380], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 430], 00:22:29.611 | 99.99th=[ 430] 00:22:29.611 bw ( KiB/s): min= 128, max= 368, per=3.91%, avg=239.20, stdev=61.53, samples=20 00:22:29.611 iops : min= 32, max= 92, avg=59.80, stdev=15.38, samples=20 00:22:29.611 lat (msec) : 250=41.69%, 500=58.31% 00:22:29.611 cpu : usr=98.34%, sys=1.23%, ctx=31, majf=0, minf=51 00:22:29.611 IO depths : 1=2.6%, 2=8.0%, 4=22.3%, 8=57.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697465: Thu Apr 18 13:51:30 2024 00:22:29.611 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10096msec) 00:22:29.611 slat (usec): min=7, max=108, avg=51.07, stdev=28.59 00:22:29.611 clat (msec): min=129, max=412, avg=251.38, stdev=46.70 00:22:29.611 lat (msec): min=129, 
max=412, avg=251.44, stdev=46.72 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 211], 00:22:29.611 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 268], 00:22:29.611 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 326], 00:22:29.611 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 414], 99.95th=[ 414], 00:22:29.611 | 99.99th=[ 414] 00:22:29.611 bw ( KiB/s): min= 128, max= 368, per=4.07%, avg=249.60, stdev=63.45, samples=20 00:22:29.611 iops : min= 32, max= 92, avg=62.40, stdev=15.86, samples=20 00:22:29.611 lat (msec) : 250=51.56%, 500=48.44% 00:22:29.611 cpu : usr=98.34%, sys=1.25%, ctx=9, majf=0, minf=39 00:22:29.611 IO depths : 1=2.3%, 2=7.7%, 4=22.2%, 8=57.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.611 filename1: (groupid=0, jobs=1): err= 0: pid=2697466: Thu Apr 18 13:51:30 2024 00:22:29.611 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10096msec) 00:22:29.611 slat (nsec): min=7404, max=99491, avg=48220.03, stdev=26591.10 00:22:29.611 clat (msec): min=128, max=372, avg=258.46, stdev=51.18 00:22:29.611 lat (msec): min=128, max=372, avg=258.51, stdev=51.18 00:22:29.611 clat percentiles (msec): 00:22:29.611 | 1.00th=[ 148], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 207], 00:22:29.611 | 30.00th=[ 234], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 284], 00:22:29.611 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 342], 00:22:29.611 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:22:29.611 | 99.99th=[ 372] 00:22:29.611 bw ( KiB/s): min= 128, max= 384, per=3.97%, avg=243.20, stdev=70.72, samples=20 00:22:29.611 iops : min= 32, max= 96, avg=60.80, 
stdev=17.68, samples=20 00:22:29.611 lat (msec) : 250=42.15%, 500=57.85% 00:22:29.611 cpu : usr=98.14%, sys=1.37%, ctx=23, majf=0, minf=52 00:22:29.611 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:22:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.611 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename1: (groupid=0, jobs=1): err= 0: pid=2697467: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10111msec) 00:22:29.612 slat (usec): min=3, max=109, avg=65.06, stdev=20.24 00:22:29.612 clat (msec): min=84, max=462, avg=265.55, stdev=66.71 00:22:29.612 lat (msec): min=84, max=462, avg=265.61, stdev=66.72 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 85], 5.00th=[ 136], 10.00th=[ 184], 20.00th=[ 209], 00:22:29.612 | 30.00th=[ 234], 40.00th=[ 257], 50.00th=[ 284], 60.00th=[ 292], 00:22:29.612 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 347], 95.00th=[ 347], 00:22:29.612 | 99.00th=[ 414], 99.50th=[ 456], 99.90th=[ 464], 99.95th=[ 464], 00:22:29.612 | 99.99th=[ 464] 00:22:29.612 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=236.80, stdev=69.76, samples=20 00:22:29.612 iops : min= 32, max= 96, avg=59.20, stdev=17.44, samples=20 00:22:29.612 lat (msec) : 100=2.63%, 250=33.22%, 500=64.14% 00:22:29.612 cpu : usr=98.39%, sys=1.19%, ctx=14, majf=0, minf=64 00:22:29.612 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: 
(groupid=0, jobs=1): err= 0: pid=2697468: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10096msec) 00:22:29.612 slat (usec): min=9, max=145, avg=25.14, stdev= 9.51 00:22:29.612 clat (msec): min=139, max=410, avg=252.16, stdev=51.61 00:22:29.612 lat (msec): min=139, max=410, avg=252.19, stdev=51.61 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 146], 5.00th=[ 157], 10.00th=[ 184], 20.00th=[ 201], 00:22:29.612 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 253], 60.00th=[ 271], 00:22:29.612 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 309], 95.00th=[ 326], 00:22:29.612 | 99.00th=[ 384], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:22:29.612 | 99.99th=[ 409] 00:22:29.612 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=249.60, stdev=62.16, samples=20 00:22:29.612 iops : min= 32, max= 96, avg=62.40, stdev=15.54, samples=20 00:22:29.612 lat (msec) : 250=48.44%, 500=51.56% 00:22:29.612 cpu : usr=97.47%, sys=1.85%, ctx=31, majf=0, minf=48 00:22:29.612 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697469: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10083msec) 00:22:29.612 slat (usec): min=21, max=101, avg=68.59, stdev=13.63 00:22:29.612 clat (msec): min=141, max=423, avg=279.50, stdev=53.04 00:22:29.612 lat (msec): min=141, max=423, avg=279.57, stdev=53.04 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 203], 20.00th=[ 236], 00:22:29.612 | 30.00th=[ 255], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.612 | 70.00th=[ 300], 80.00th=[ 
317], 90.00th=[ 363], 95.00th=[ 380], 00:22:29.612 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:22:29.612 | 99.99th=[ 422] 00:22:29.612 bw ( KiB/s): min= 127, max= 384, per=3.65%, avg=223.95, stdev=66.35, samples=20 00:22:29.612 iops : min= 31, max= 96, avg=55.95, stdev=16.65, samples=20 00:22:29.612 lat (msec) : 250=26.04%, 500=73.96% 00:22:29.612 cpu : usr=98.27%, sys=1.31%, ctx=77, majf=0, minf=44 00:22:29.612 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697470: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=58, BW=234KiB/s (240kB/s)(2360KiB/10082msec) 00:22:29.612 slat (nsec): min=8448, max=49553, avg=20394.68, stdev=9199.07 00:22:29.612 clat (msec): min=141, max=486, avg=273.20, stdev=59.46 00:22:29.612 lat (msec): min=141, max=486, avg=273.22, stdev=59.46 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 146], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 226], 00:22:29.612 | 30.00th=[ 236], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.612 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 347], 95.00th=[ 347], 00:22:29.612 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 485], 99.95th=[ 485], 00:22:29.612 | 99.99th=[ 485] 00:22:29.612 bw ( KiB/s): min= 128, max= 384, per=3.74%, avg=229.60, stdev=63.85, samples=20 00:22:29.612 iops : min= 32, max= 96, avg=57.40, stdev=15.96, samples=20 00:22:29.612 lat (msec) : 250=32.20%, 500=67.80% 00:22:29.612 cpu : usr=97.90%, sys=1.62%, ctx=44, majf=0, minf=54 00:22:29.612 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697471: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=65, BW=263KiB/s (270kB/s)(2656KiB/10088msec) 00:22:29.612 slat (nsec): min=8180, max=80564, avg=23410.79, stdev=13972.28 00:22:29.612 clat (msec): min=129, max=364, avg=241.76, stdev=47.94 00:22:29.612 lat (msec): min=129, max=364, avg=241.78, stdev=47.94 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 138], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 201], 00:22:29.612 | 30.00th=[ 209], 40.00th=[ 224], 50.00th=[ 234], 60.00th=[ 249], 00:22:29.612 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 334], 00:22:29.612 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:22:29.612 | 99.99th=[ 363] 00:22:29.612 bw ( KiB/s): min= 128, max= 368, per=4.23%, avg=259.20, stdev=60.00, samples=20 00:22:29.612 iops : min= 32, max= 92, avg=64.80, stdev=15.00, samples=20 00:22:29.612 lat (msec) : 250=60.54%, 500=39.46% 00:22:29.612 cpu : usr=97.84%, sys=1.57%, ctx=38, majf=0, minf=74 00:22:29.612 IO depths : 1=2.1%, 2=6.0%, 4=17.8%, 8=63.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697472: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10106msec) 00:22:29.612 slat (usec): min=16, max=110, avg=49.24, stdev=18.33 00:22:29.612 clat (msec): min=77, max=472, avg=252.26, stdev=63.10 
00:22:29.612 lat (msec): min=77, max=472, avg=252.31, stdev=63.10 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 78], 5.00th=[ 144], 10.00th=[ 182], 20.00th=[ 199], 00:22:29.612 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 253], 60.00th=[ 284], 00:22:29.612 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 347], 00:22:29.612 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 472], 99.95th=[ 472], 00:22:29.612 | 99.99th=[ 472] 00:22:29.612 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=249.60, stdev=63.87, samples=20 00:22:29.612 iops : min= 32, max= 96, avg=62.40, stdev=15.97, samples=20 00:22:29.612 lat (msec) : 100=2.50%, 250=45.31%, 500=52.19% 00:22:29.612 cpu : usr=97.96%, sys=1.63%, ctx=9, majf=0, minf=63 00:22:29.612 IO depths : 1=3.0%, 2=9.1%, 4=24.4%, 8=54.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697473: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10096msec) 00:22:29.612 slat (nsec): min=8386, max=81174, avg=25995.10, stdev=15319.90 00:22:29.612 clat (msec): min=134, max=444, avg=246.01, stdev=55.61 00:22:29.612 lat (msec): min=134, max=444, avg=246.04, stdev=55.61 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 148], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 192], 00:22:29.612 | 30.00th=[ 213], 40.00th=[ 228], 50.00th=[ 247], 60.00th=[ 271], 00:22:29.612 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 342], 00:22:29.612 | 99.00th=[ 347], 99.50th=[ 393], 99.90th=[ 447], 99.95th=[ 447], 00:22:29.612 | 99.99th=[ 447] 00:22:29.612 bw ( KiB/s): min= 128, max= 384, per=4.18%, avg=256.00, stdev=71.93, samples=20 00:22:29.612 
iops : min= 32, max= 96, avg=64.00, stdev=17.98, samples=20 00:22:29.612 lat (msec) : 250=51.22%, 500=48.78% 00:22:29.612 cpu : usr=98.07%, sys=1.52%, ctx=25, majf=0, minf=40 00:22:29.612 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:22:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.612 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.612 filename2: (groupid=0, jobs=1): err= 0: pid=2697474: Thu Apr 18 13:51:30 2024 00:22:29.612 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10104msec) 00:22:29.612 slat (nsec): min=6755, max=47817, avg=22115.26, stdev=5515.11 00:22:29.612 clat (msec): min=83, max=406, avg=246.23, stdev=56.47 00:22:29.612 lat (msec): min=83, max=406, avg=246.25, stdev=56.47 00:22:29.612 clat percentiles (msec): 00:22:29.612 | 1.00th=[ 84], 5.00th=[ 144], 10.00th=[ 174], 20.00th=[ 201], 00:22:29.612 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 247], 60.00th=[ 259], 00:22:29.612 | 70.00th=[ 288], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 326], 00:22:29.612 | 99.00th=[ 342], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 405], 00:22:29.612 | 99.99th=[ 405] 00:22:29.612 bw ( KiB/s): min= 144, max= 384, per=4.18%, avg=256.00, stdev=55.67, samples=20 00:22:29.612 iops : min= 36, max= 96, avg=64.00, stdev=13.92, samples=20 00:22:29.612 lat (msec) : 100=2.44%, 250=49.24%, 500=48.32% 00:22:29.612 cpu : usr=97.75%, sys=1.66%, ctx=14, majf=0, minf=54 00:22:29.612 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:22:29.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.613 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.613 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.613 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:22:29.613 filename2: (groupid=0, jobs=1): err= 0: pid=2697475: Thu Apr 18 13:51:30 2024 00:22:29.613 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10080msec) 00:22:29.613 slat (nsec): min=8176, max=93954, avg=33703.57, stdev=25994.05 00:22:29.613 clat (msec): min=190, max=389, avg=279.68, stdev=46.91 00:22:29.613 lat (msec): min=190, max=389, avg=279.71, stdev=46.89 00:22:29.613 clat percentiles (msec): 00:22:29.613 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 205], 20.00th=[ 236], 00:22:29.613 | 30.00th=[ 257], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 296], 00:22:29.613 | 70.00th=[ 300], 80.00th=[ 317], 90.00th=[ 330], 95.00th=[ 380], 00:22:29.613 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:22:29.613 | 99.99th=[ 388] 00:22:29.613 bw ( KiB/s): min= 128, max= 384, per=3.65%, avg=224.00, stdev=70.42, samples=20 00:22:29.613 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:22:29.613 lat (msec) : 250=24.48%, 500=75.52% 00:22:29.613 cpu : usr=98.44%, sys=1.13%, ctx=23, majf=0, minf=49 00:22:29.613 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:29.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.613 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.613 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:29.613 00:22:29.613 Run status group 0 (all jobs): 00:22:29.613 READ: bw=6117KiB/s (6264kB/s), 229KiB/s-343KiB/s (234kB/s-352kB/s), io=60.4MiB (63.4MB), run=10075-10117msec 00:22:29.613 13:51:31 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:29.613 13:51:31 -- target/dif.sh@43 -- # local sub 00:22:29.613 13:51:31 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.613 13:51:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:29.613 13:51:31 -- target/dif.sh@36 -- # local sub_id=0 00:22:29.613 13:51:31 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.613 13:51:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:29.613 13:51:31 -- target/dif.sh@36 -- # local sub_id=1 00:22:29.613 13:51:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.613 13:51:31 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:29.613 13:51:31 -- target/dif.sh@36 -- # local sub_id=2 00:22:29.613 13:51:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # numjobs=2 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # iodepth=8 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # runtime=5 00:22:29.613 13:51:31 -- target/dif.sh@115 -- # files=1 00:22:29.613 13:51:31 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:29.613 13:51:31 -- target/dif.sh@28 -- # local sub 00:22:29.613 13:51:31 -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.613 13:51:31 -- target/dif.sh@31 -- # create_subsystem 0 00:22:29.613 13:51:31 -- target/dif.sh@18 -- # local sub_id=0 00:22:29.613 13:51:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 bdev_null0 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 [2024-04-18 13:51:31.228434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.613 13:51:31 -- target/dif.sh@31 -- # create_subsystem 1 00:22:29.613 13:51:31 -- target/dif.sh@18 -- # local sub_id=1 00:22:29.613 13:51:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 bdev_null1 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.613 13:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.613 13:51:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.613 13:51:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.613 13:51:31 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:29.613 13:51:31 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:29.613 13:51:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:29.613 13:51:31 -- nvmf/common.sh@521 -- # config=() 00:22:29.613 13:51:31 -- nvmf/common.sh@521 -- # local subsystem config 00:22:29.613 13:51:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:29.613 13:51:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:29.613 { 00:22:29.613 "params": { 00:22:29.613 "name": "Nvme$subsystem", 00:22:29.613 "trtype": "$TEST_TRANSPORT", 00:22:29.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.613 "adrfam": "ipv4", 00:22:29.613 "trsvcid": "$NVMF_PORT", 00:22:29.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.613 "hdgst": ${hdgst:-false}, 00:22:29.613 "ddgst": ${ddgst:-false} 00:22:29.613 }, 00:22:29.613 "method": "bdev_nvme_attach_controller" 00:22:29.613 } 00:22:29.613 EOF 00:22:29.613 )") 00:22:29.613 13:51:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.613 13:51:31 -- target/dif.sh@82 -- # gen_fio_conf 00:22:29.613 13:51:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.613 13:51:31 -- target/dif.sh@54 -- # local file 00:22:29.613 13:51:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:29.613 13:51:31 -- target/dif.sh@56 -- # cat 00:22:29.613 13:51:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.613 13:51:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:29.614 13:51:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:22:29.614 13:51:31 -- common/autotest_common.sh@1327 -- # shift 00:22:29.614 13:51:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:29.614 13:51:31 -- nvmf/common.sh@543 -- # cat 00:22:29.614 13:51:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.614 13:51:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:29.614 13:51:31 -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.614 13:51:31 -- target/dif.sh@73 -- # cat 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:29.614 13:51:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:29.614 13:51:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:29.614 { 00:22:29.614 "params": { 00:22:29.614 "name": "Nvme$subsystem", 00:22:29.614 "trtype": "$TEST_TRANSPORT", 00:22:29.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.614 "adrfam": "ipv4", 00:22:29.614 "trsvcid": "$NVMF_PORT", 00:22:29.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.614 "hdgst": ${hdgst:-false}, 00:22:29.614 "ddgst": ${ddgst:-false} 00:22:29.614 }, 00:22:29.614 "method": "bdev_nvme_attach_controller" 00:22:29.614 } 00:22:29.614 EOF 00:22:29.614 )") 00:22:29.614 13:51:31 -- target/dif.sh@72 -- # (( file++ )) 00:22:29.614 13:51:31 -- nvmf/common.sh@543 -- # cat 00:22:29.614 13:51:31 -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.614 13:51:31 -- nvmf/common.sh@545 -- # jq . 
00:22:29.614 13:51:31 -- nvmf/common.sh@546 -- # IFS=, 00:22:29.614 13:51:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:29.614 "params": { 00:22:29.614 "name": "Nvme0", 00:22:29.614 "trtype": "tcp", 00:22:29.614 "traddr": "10.0.0.2", 00:22:29.614 "adrfam": "ipv4", 00:22:29.614 "trsvcid": "4420", 00:22:29.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:29.614 "hdgst": false, 00:22:29.614 "ddgst": false 00:22:29.614 }, 00:22:29.614 "method": "bdev_nvme_attach_controller" 00:22:29.614 },{ 00:22:29.614 "params": { 00:22:29.614 "name": "Nvme1", 00:22:29.614 "trtype": "tcp", 00:22:29.614 "traddr": "10.0.0.2", 00:22:29.614 "adrfam": "ipv4", 00:22:29.614 "trsvcid": "4420", 00:22:29.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.614 "hdgst": false, 00:22:29.614 "ddgst": false 00:22:29.614 }, 00:22:29.614 "method": "bdev_nvme_attach_controller" 00:22:29.614 }' 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:29.614 13:51:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:29.614 13:51:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:29.614 13:51:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:29.614 13:51:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:29.614 13:51:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:29.614 13:51:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.614 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:29.614 ... 00:22:29.614 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:29.614 ... 00:22:29.614 fio-3.35 00:22:29.614 Starting 4 threads 00:22:29.614 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.872 00:22:34.872 filename0: (groupid=0, jobs=1): err= 0: pid=2698979: Thu Apr 18 13:51:37 2024 00:22:34.872 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:22:34.872 slat (nsec): min=6645, max=55105, avg=13921.88, stdev=6773.38 00:22:34.872 clat (usec): min=953, max=8089, avg=4268.66, stdev=681.22 00:22:34.872 lat (usec): min=974, max=8107, avg=4282.58, stdev=681.57 00:22:34.872 clat percentiles (usec): 00:22:34.872 | 1.00th=[ 2638], 5.00th=[ 3195], 10.00th=[ 3490], 20.00th=[ 3818], 00:22:34.872 | 30.00th=[ 3982], 40.00th=[ 4178], 50.00th=[ 4359], 60.00th=[ 4490], 00:22:34.872 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5342], 00:22:34.872 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 7767], 99.95th=[ 8029], 00:22:34.872 | 99.99th=[ 8094] 00:22:34.872 bw ( KiB/s): min=13920, max=16576, per=25.32%, avg=14836.70, stdev=983.13, samples=10 00:22:34.872 iops : min= 1740, max= 2072, avg=1854.50, stdev=122.82, samples=10 00:22:34.872 lat (usec) : 1000=0.01% 00:22:34.872 lat (msec) : 2=0.19%, 4=30.29%, 10=69.50% 00:22:34.872 cpu : usr=94.36%, sys=4.92%, ctx=51, majf=0, minf=0 00:22:34.872 IO depths : 1=0.2%, 2=9.4%, 4=62.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:34.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.872 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.872 issued rwts: total=9276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:34.872 filename0: (groupid=0, jobs=1): err= 0: pid=2698980: Thu Apr 18 13:51:37 2024 
00:22:34.872 read: IOPS=1811, BW=14.2MiB/s (14.8MB/s)(70.8MiB/5002msec) 00:22:34.872 slat (nsec): min=6436, max=65412, avg=17009.34, stdev=8141.88 00:22:34.872 clat (usec): min=770, max=8785, avg=4357.24, stdev=710.64 00:22:34.872 lat (usec): min=788, max=8800, avg=4374.25, stdev=710.84 00:22:34.872 clat percentiles (usec): 00:22:34.872 | 1.00th=[ 2606], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3884], 00:22:34.872 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490], 00:22:34.872 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5669], 00:22:34.872 | 99.00th=[ 6915], 99.50th=[ 7373], 99.90th=[ 8356], 99.95th=[ 8455], 00:22:34.872 | 99.99th=[ 8848] 00:22:34.872 bw ( KiB/s): min=13632, max=15648, per=24.73%, avg=14490.90, stdev=742.93, samples=10 00:22:34.872 iops : min= 1704, max= 1956, avg=1811.30, stdev=92.82, samples=10 00:22:34.872 lat (usec) : 1000=0.06% 00:22:34.872 lat (msec) : 2=0.36%, 4=26.06%, 10=73.52% 00:22:34.872 cpu : usr=94.64%, sys=4.74%, ctx=9, majf=0, minf=9 00:22:34.872 IO depths : 1=0.1%, 2=12.1%, 4=60.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:34.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.872 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.872 issued rwts: total=9062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:34.872 filename1: (groupid=0, jobs=1): err= 0: pid=2698981: Thu Apr 18 13:51:37 2024 00:22:34.872 read: IOPS=1846, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5004msec) 00:22:34.872 slat (nsec): min=4939, max=65381, avg=16212.35, stdev=8329.64 00:22:34.872 clat (usec): min=853, max=7723, avg=4276.54, stdev=632.68 00:22:34.872 lat (usec): min=866, max=7737, avg=4292.75, stdev=633.36 00:22:34.872 clat percentiles (usec): 00:22:34.872 | 1.00th=[ 2606], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3851], 00:22:34.872 | 30.00th=[ 4015], 40.00th=[ 4178], 50.00th=[ 4359], 60.00th=[ 
4424], 00:22:34.872 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5342], 00:22:34.872 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7701], 00:22:34.872 | 99.99th=[ 7701] 00:22:34.872 bw ( KiB/s): min=13952, max=15680, per=25.21%, avg=14771.20, stdev=562.85, samples=10 00:22:34.872 iops : min= 1744, max= 1960, avg=1846.40, stdev=70.36, samples=10 00:22:34.872 lat (usec) : 1000=0.03% 00:22:34.872 lat (msec) : 2=0.26%, 4=29.06%, 10=70.65% 00:22:34.872 cpu : usr=94.64%, sys=4.48%, ctx=119, majf=0, minf=0 00:22:34.873 IO depths : 1=0.1%, 2=14.0%, 4=58.8%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:34.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.873 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.873 issued rwts: total=9239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.873 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:34.873 filename1: (groupid=0, jobs=1): err= 0: pid=2698982: Thu Apr 18 13:51:37 2024 00:22:34.873 read: IOPS=1813, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5003msec) 00:22:34.873 slat (nsec): min=6327, max=55722, avg=17024.10, stdev=7572.05 00:22:34.873 clat (usec): min=847, max=8329, avg=4353.33, stdev=630.99 00:22:34.873 lat (usec): min=867, max=8339, avg=4370.35, stdev=631.56 00:22:34.873 clat percentiles (usec): 00:22:34.873 | 1.00th=[ 2933], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3916], 00:22:34.873 | 30.00th=[ 4080], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:22:34.873 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5407], 00:22:34.873 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8160], 00:22:34.873 | 99.99th=[ 8356] 00:22:34.873 bw ( KiB/s): min=13424, max=16048, per=24.76%, avg=14505.60, stdev=785.84, samples=10 00:22:34.873 iops : min= 1678, max= 2006, avg=1813.20, stdev=98.23, samples=10 00:22:34.873 lat (usec) : 1000=0.02% 00:22:34.873 lat (msec) : 2=0.33%, 4=24.26%, 10=75.38% 
00:22:34.873 cpu : usr=94.18%, sys=5.06%, ctx=69, majf=0, minf=0 00:22:34.873 IO depths : 1=0.1%, 2=11.3%, 4=61.6%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:34.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.873 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.873 issued rwts: total=9071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.873 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:34.873 00:22:34.873 Run status group 0 (all jobs): 00:22:34.873 READ: bw=57.2MiB/s (60.0MB/s), 14.2MiB/s-14.5MiB/s (14.8MB/s-15.2MB/s), io=286MiB (300MB), run=5002-5004msec 00:22:34.873 13:51:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:34.873 13:51:37 -- target/dif.sh@43 -- # local sub 00:22:34.873 13:51:37 -- target/dif.sh@45 -- # for sub in "$@" 00:22:34.873 13:51:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:34.873 13:51:37 -- target/dif.sh@36 -- # local sub_id=0 00:22:34.873 13:51:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:34.873 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.873 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:34.873 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.873 13:51:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:34.873 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.873 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:34.873 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.873 13:51:37 -- target/dif.sh@45 -- # for sub in "$@" 00:22:34.873 13:51:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:34.873 13:51:37 -- target/dif.sh@36 -- # local sub_id=1 00:22:34.873 13:51:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.873 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.873 13:51:37 -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.873 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.873 13:51:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:34.873 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.873 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:34.873 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.873 00:22:34.873 real 0m24.502s 00:22:34.873 user 4m34.880s 00:22:34.873 sys 0m6.595s 00:22:34.873 13:51:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:34.873 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:34.873 ************************************ 00:22:34.873 END TEST fio_dif_rand_params 00:22:34.873 ************************************ 00:22:34.873 13:51:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:34.873 13:51:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:34.873 13:51:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.873 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 ************************************ 00:22:35.133 START TEST fio_dif_digest 00:22:35.133 ************************************ 00:22:35.133 13:51:37 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:35.133 13:51:37 -- target/dif.sh@123 -- # local NULL_DIF 00:22:35.133 13:51:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:35.133 13:51:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:35.133 13:51:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:35.133 13:51:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:35.133 13:51:37 -- target/dif.sh@127 -- # numjobs=3 00:22:35.133 13:51:37 -- target/dif.sh@127 -- # iodepth=3 00:22:35.133 13:51:37 -- target/dif.sh@127 -- # runtime=10 00:22:35.133 13:51:37 -- target/dif.sh@128 -- # hdgst=true 00:22:35.133 13:51:37 -- target/dif.sh@128 -- # ddgst=true 00:22:35.133 13:51:37 -- target/dif.sh@130 -- # 
create_subsystems 0 00:22:35.133 13:51:37 -- target/dif.sh@28 -- # local sub 00:22:35.133 13:51:37 -- target/dif.sh@30 -- # for sub in "$@" 00:22:35.133 13:51:37 -- target/dif.sh@31 -- # create_subsystem 0 00:22:35.133 13:51:37 -- target/dif.sh@18 -- # local sub_id=0 00:22:35.133 13:51:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:35.133 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.133 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 bdev_null0 00:22:35.133 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.133 13:51:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:35.133 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.133 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.133 13:51:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:35.133 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.133 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.133 13:51:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.133 13:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.133 13:51:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 [2024-04-18 13:51:37.747169] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.133 13:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.133 13:51:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:35.133 13:51:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:35.133 13:51:37 -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:22:35.133 13:51:37 -- nvmf/common.sh@521 -- # config=() 00:22:35.133 13:51:37 -- nvmf/common.sh@521 -- # local subsystem config 00:22:35.133 13:51:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:35.133 13:51:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:35.133 { 00:22:35.133 "params": { 00:22:35.133 "name": "Nvme$subsystem", 00:22:35.133 "trtype": "$TEST_TRANSPORT", 00:22:35.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.133 "adrfam": "ipv4", 00:22:35.133 "trsvcid": "$NVMF_PORT", 00:22:35.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.133 "hdgst": ${hdgst:-false}, 00:22:35.133 "ddgst": ${ddgst:-false} 00:22:35.133 }, 00:22:35.133 "method": "bdev_nvme_attach_controller" 00:22:35.133 } 00:22:35.133 EOF 00:22:35.133 )") 00:22:35.133 13:51:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.133 13:51:37 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.133 13:51:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:35.133 13:51:37 -- target/dif.sh@82 -- # gen_fio_conf 00:22:35.133 13:51:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.133 13:51:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:35.133 13:51:37 -- target/dif.sh@54 -- # local file 00:22:35.133 13:51:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.133 13:51:37 -- common/autotest_common.sh@1327 -- # shift 00:22:35.133 13:51:37 -- target/dif.sh@56 -- # cat 00:22:35.133 13:51:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:35.133 13:51:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 
00:22:35.133 13:51:37 -- nvmf/common.sh@543 -- # cat 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:35.133 13:51:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:35.133 13:51:37 -- target/dif.sh@72 -- # (( file <= files )) 00:22:35.133 13:51:37 -- nvmf/common.sh@545 -- # jq . 00:22:35.133 13:51:37 -- nvmf/common.sh@546 -- # IFS=, 00:22:35.133 13:51:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:35.133 "params": { 00:22:35.133 "name": "Nvme0", 00:22:35.133 "trtype": "tcp", 00:22:35.133 "traddr": "10.0.0.2", 00:22:35.133 "adrfam": "ipv4", 00:22:35.133 "trsvcid": "4420", 00:22:35.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:35.133 "hdgst": true, 00:22:35.133 "ddgst": true 00:22:35.133 }, 00:22:35.133 "method": "bdev_nvme_attach_controller" 00:22:35.133 }' 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:35.133 13:51:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:35.133 13:51:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:35.133 13:51:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:35.133 13:51:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:35.133 13:51:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:35.133 13:51:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:35.391 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:35.391 ... 00:22:35.391 fio-3.35 00:22:35.391 Starting 3 threads 00:22:35.391 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.585 00:22:47.585 filename0: (groupid=0, jobs=1): err= 0: pid=2699748: Thu Apr 18 13:51:48 2024 00:22:47.585 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(257MiB/10045msec) 00:22:47.585 slat (nsec): min=7854, max=41624, avg=16817.13, stdev=5087.55 00:22:47.585 clat (usec): min=8395, max=59111, avg=14636.50, stdev=2500.70 00:22:47.585 lat (usec): min=8408, max=59145, avg=14653.32, stdev=2500.96 00:22:47.585 clat percentiles (usec): 00:22:47.585 | 1.00th=[ 9503], 5.00th=[11600], 10.00th=[12911], 20.00th=[13698], 00:22:47.585 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:22:47.585 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:22:47.585 | 99.00th=[17695], 99.50th=[18220], 99.90th=[58983], 99.95th=[58983], 00:22:47.585 | 99.99th=[58983] 00:22:47.585 bw ( KiB/s): min=23296, max=29184, per=33.13%, avg=26252.80, stdev=1332.74, samples=20 00:22:47.585 iops : min= 182, max= 228, avg=205.10, stdev=10.41, samples=20 00:22:47.585 lat (msec) : 10=2.34%, 20=97.42%, 50=0.10%, 100=0.15% 00:22:47.585 cpu : usr=93.37%, sys=6.13%, ctx=22, majf=0, minf=88 00:22:47.585 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.585 filename0: (groupid=0, jobs=1): err= 0: pid=2699749: Thu Apr 18 13:51:48 2024 00:22:47.585 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(264MiB/10047msec) 00:22:47.585 slat 
(nsec): min=4741, max=51103, avg=17051.20, stdev=5342.07 00:22:47.585 clat (usec): min=7936, max=54422, avg=14236.63, stdev=1947.77 00:22:47.585 lat (usec): min=7951, max=54431, avg=14253.68, stdev=1947.89 00:22:47.585 clat percentiles (usec): 00:22:47.585 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[12649], 20.00th=[13435], 00:22:47.585 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:22:47.585 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:22:47.585 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20055], 99.95th=[52167], 00:22:47.585 | 99.99th=[54264] 00:22:47.585 bw ( KiB/s): min=25344, max=29184, per=34.05%, avg=26985.05, stdev=1018.38, samples=20 00:22:47.585 iops : min= 198, max= 228, avg=210.80, stdev= 7.96, samples=20 00:22:47.585 lat (msec) : 10=3.51%, 20=96.35%, 50=0.05%, 100=0.09% 00:22:47.585 cpu : usr=92.93%, sys=6.57%, ctx=25, majf=0, minf=155 00:22:47.585 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.585 filename0: (groupid=0, jobs=1): err= 0: pid=2699750: Thu Apr 18 13:51:48 2024 00:22:47.585 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(257MiB/10046msec) 00:22:47.585 slat (nsec): min=4622, max=96672, avg=19513.96, stdev=5356.75 00:22:47.585 clat (usec): min=8951, max=57697, avg=14613.23, stdev=4960.17 00:22:47.585 lat (usec): min=8964, max=57715, avg=14632.75, stdev=4960.44 00:22:47.585 clat percentiles (usec): 00:22:47.585 | 1.00th=[11338], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:22:47.585 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:22:47.585 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15533], 95.00th=[15926], 
00:22:47.585 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:22:47.585 | 99.99th=[57934] 00:22:47.585 bw ( KiB/s): min=22528, max=27904, per=33.18%, avg=26291.20, stdev=1284.30, samples=20 00:22:47.585 iops : min= 176, max= 218, avg=205.40, stdev=10.03, samples=20 00:22:47.585 lat (msec) : 10=0.29%, 20=98.30%, 100=1.41% 00:22:47.585 cpu : usr=93.66%, sys=5.83%, ctx=24, majf=0, minf=162 00:22:47.585 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.585 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:47.585 00:22:47.585 Run status group 0 (all jobs): 00:22:47.585 READ: bw=77.4MiB/s (81.1MB/s), 25.5MiB/s-26.3MiB/s (26.8MB/s-27.5MB/s), io=778MiB (815MB), run=10045-10047msec 00:22:47.585 13:51:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:47.585 13:51:48 -- target/dif.sh@43 -- # local sub 00:22:47.585 13:51:48 -- target/dif.sh@45 -- # for sub in "$@" 00:22:47.585 13:51:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:47.585 13:51:48 -- target/dif.sh@36 -- # local sub_id=0 00:22:47.585 13:51:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:47.585 13:51:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.585 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:22:47.585 13:51:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.585 13:51:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:47.585 13:51:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.585 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:22:47.585 13:51:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.585 00:22:47.585 real 0m11.146s 00:22:47.585 user 0m29.276s 
00:22:47.585 sys 0m2.171s 00:22:47.585 13:51:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:47.585 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:22:47.585 ************************************ 00:22:47.585 END TEST fio_dif_digest 00:22:47.585 ************************************ 00:22:47.585 13:51:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:47.585 13:51:48 -- target/dif.sh@147 -- # nvmftestfini 00:22:47.585 13:51:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:47.585 13:51:48 -- nvmf/common.sh@117 -- # sync 00:22:47.585 13:51:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.585 13:51:48 -- nvmf/common.sh@120 -- # set +e 00:22:47.585 13:51:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.586 13:51:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.586 rmmod nvme_tcp 00:22:47.586 rmmod nvme_fabrics 00:22:47.586 rmmod nvme_keyring 00:22:47.586 13:51:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.586 13:51:48 -- nvmf/common.sh@124 -- # set -e 00:22:47.586 13:51:48 -- nvmf/common.sh@125 -- # return 0 00:22:47.586 13:51:48 -- nvmf/common.sh@478 -- # '[' -n 2692916 ']' 00:22:47.586 13:51:48 -- nvmf/common.sh@479 -- # killprocess 2692916 00:22:47.586 13:51:48 -- common/autotest_common.sh@936 -- # '[' -z 2692916 ']' 00:22:47.586 13:51:48 -- common/autotest_common.sh@940 -- # kill -0 2692916 00:22:47.586 13:51:48 -- common/autotest_common.sh@941 -- # uname 00:22:47.586 13:51:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.586 13:51:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2692916 00:22:47.586 13:51:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:47.586 13:51:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:47.586 13:51:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2692916' 00:22:47.586 killing process with pid 2692916 00:22:47.586 13:51:48 -- 
common/autotest_common.sh@955 -- # kill 2692916 00:22:47.586 13:51:48 -- common/autotest_common.sh@960 -- # wait 2692916 00:22:47.586 13:51:49 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:47.586 13:51:49 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:47.586 Waiting for block devices as requested 00:22:47.586 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:22:47.843 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:47.843 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:47.843 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:47.843 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.101 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.101 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:48.101 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.101 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.101 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:48.359 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:48.359 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:48.360 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.618 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.618 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:48.618 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.618 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.903 13:51:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:48.903 13:51:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:48.903 13:51:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.903 13:51:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.903 13:51:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.903 13:51:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:48.903 13:51:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.805 13:51:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:22:50.805 00:22:50.805 real 1m7.283s 00:22:50.805 user 6m32.042s 00:22:50.805 sys 0m18.755s 00:22:50.806 13:51:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:50.806 13:51:53 -- common/autotest_common.sh@10 -- # set +x 00:22:50.806 ************************************ 00:22:50.806 END TEST nvmf_dif 00:22:50.806 ************************************ 00:22:50.806 13:51:53 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:50.806 13:51:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:50.806 13:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:50.806 13:51:53 -- common/autotest_common.sh@10 -- # set +x 00:22:51.063 ************************************ 00:22:51.063 START TEST nvmf_abort_qd_sizes 00:22:51.063 ************************************ 00:22:51.063 13:51:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:51.063 * Looking for test storage... 
00:22:51.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.063 13:51:53 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.063 13:51:53 -- nvmf/common.sh@7 -- # uname -s 00:22:51.063 13:51:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.063 13:51:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.063 13:51:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.063 13:51:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.063 13:51:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.063 13:51:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.063 13:51:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.063 13:51:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.063 13:51:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.063 13:51:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.063 13:51:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:51.063 13:51:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:51.063 13:51:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.063 13:51:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.063 13:51:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.063 13:51:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.063 13:51:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.063 13:51:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.063 13:51:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.063 13:51:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.063 13:51:53 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.063 13:51:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.063 13:51:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.063 13:51:53 -- paths/export.sh@5 -- # export PATH 00:22:51.063 13:51:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.063 13:51:53 -- nvmf/common.sh@47 -- # : 0 00:22:51.063 13:51:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.063 13:51:53 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:22:51.063 13:51:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.063 13:51:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.063 13:51:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.063 13:51:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.063 13:51:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.063 13:51:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.063 13:51:53 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:51.063 13:51:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:51.063 13:51:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.063 13:51:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:51.063 13:51:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:51.063 13:51:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:51.063 13:51:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.063 13:51:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:51.063 13:51:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.063 13:51:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:51.063 13:51:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:51.063 13:51:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.063 13:51:53 -- common/autotest_common.sh@10 -- # set +x 00:22:52.969 13:51:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:52.969 13:51:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.969 13:51:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.969 13:51:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.969 13:51:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.969 13:51:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.969 13:51:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.969 13:51:55 -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.969 13:51:55 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:22:52.969 13:51:55 -- nvmf/common.sh@296 -- # e810=() 00:22:52.969 13:51:55 -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.969 13:51:55 -- nvmf/common.sh@297 -- # x722=() 00:22:52.969 13:51:55 -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.969 13:51:55 -- nvmf/common.sh@298 -- # mlx=() 00:22:52.969 13:51:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.969 13:51:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.969 13:51:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.969 13:51:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.969 13:51:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.969 13:51:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.969 13:51:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:52.969 Found 
0000:84:00.0 (0x8086 - 0x159b) 00:22:52.969 13:51:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.969 13:51:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:52.969 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:52.969 13:51:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.969 13:51:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.969 13:51:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.969 13:51:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:52.969 13:51:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.969 13:51:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:52.969 Found net devices under 0000:84:00.0: cvl_0_0 00:22:52.969 13:51:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.969 13:51:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.969 13:51:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.969 13:51:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:52.969 
13:51:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.969 13:51:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:52.969 Found net devices under 0000:84:00.1: cvl_0_1 00:22:52.969 13:51:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.969 13:51:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:52.969 13:51:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:52.969 13:51:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:52.969 13:51:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:52.969 13:51:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.969 13:51:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.969 13:51:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.969 13:51:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.969 13:51:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.969 13:51:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.969 13:51:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.969 13:51:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.970 13:51:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.970 13:51:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.970 13:51:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.970 13:51:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.970 13:51:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.970 13:51:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.970 13:51:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.970 13:51:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.970 13:51:55 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.970 13:51:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.970 13:51:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.970 13:51:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:52.970 00:22:52.970 --- 10.0.0.2 ping statistics --- 00:22:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.970 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:52.970 13:51:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:22:52.970 00:22:52.970 --- 10.0.0.1 ping statistics --- 00:22:52.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.970 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:52.970 13:51:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.970 13:51:55 -- nvmf/common.sh@411 -- # return 0 00:22:52.970 13:51:55 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:52.970 13:51:55 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:53.905 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:53.905 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:54.164 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:22:54.164 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:54.164 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:55.098 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:22:55.098 13:51:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.098 13:51:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:55.098 13:51:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:55.098 13:51:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.098 13:51:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:55.098 13:51:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:55.098 13:51:57 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:55.098 13:51:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:55.098 13:51:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:55.098 13:51:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.098 13:51:57 -- nvmf/common.sh@470 -- # nvmfpid=2704577 00:22:55.098 13:51:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:55.098 13:51:57 -- nvmf/common.sh@471 -- # waitforlisten 2704577 00:22:55.098 13:51:57 -- common/autotest_common.sh@817 -- # '[' -z 2704577 ']' 00:22:55.098 13:51:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.098 13:51:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.098 13:51:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:55.098 13:51:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.098 13:51:57 -- common/autotest_common.sh@10 -- # set +x 00:22:55.357 [2024-04-18 13:51:57.926100] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:22:55.357 [2024-04-18 13:51:57.926169] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.357 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.357 [2024-04-18 13:51:57.990234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.357 [2024-04-18 13:51:58.099969] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.357 [2024-04-18 13:51:58.100030] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.357 [2024-04-18 13:51:58.100044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.357 [2024-04-18 13:51:58.100055] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.357 [2024-04-18 13:51:58.100064] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.357 [2024-04-18 13:51:58.100121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.357 [2024-04-18 13:51:58.100187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.357 [2024-04-18 13:51:58.100247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.357 [2024-04-18 13:51:58.100251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.616 13:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.616 13:51:58 -- common/autotest_common.sh@850 -- # return 0 00:22:55.616 13:51:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:55.616 13:51:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:55.616 13:51:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.616 13:51:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:55.616 13:51:58 -- scripts/common.sh@309 -- # local bdf bdfs 00:22:55.616 13:51:58 -- scripts/common.sh@310 -- # local nvmes 00:22:55.616 13:51:58 -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:22:55.616 13:51:58 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:22:55.616 13:51:58 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:55.616 13:51:58 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:22:55.616 13:51:58 -- scripts/common.sh@320 -- # uname -s 00:22:55.616 13:51:58 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:55.616 13:51:58 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:55.616 13:51:58 -- scripts/common.sh@325 -- # (( 1 )) 00:22:55.616 13:51:58 -- 
scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:55.616 13:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:55.616 13:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.616 13:51:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.616 ************************************ 00:22:55.616 START TEST spdk_target_abort 00:22:55.616 ************************************ 00:22:55.616 13:51:58 -- common/autotest_common.sh@1111 -- # spdk_target 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:55.616 13:51:58 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:22:55.616 13:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:55.616 13:51:58 -- common/autotest_common.sh@10 -- # set +x 00:22:58.906 spdk_targetn1 00:22:58.906 13:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.906 13:52:01 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.906 13:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.906 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.906 [2024-04-18 13:52:01.208452] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.906 13:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:58.907 13:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.907 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.907 13:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.907 13:52:01 -- 
target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:58.907 13:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.907 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.907 13:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:58.907 13:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.907 13:52:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.907 [2024-04-18 13:52:01.240728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.907 13:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in 
trtype adrfam traddr trsvcid subnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:58.907 13:52:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.907 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.189 Initializing NVMe Controllers 00:23:02.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:02.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:02.189 Initialization complete. Launching workers. 
00:23:02.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9948, failed: 0 00:23:02.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1380, failed to submit 8568 00:23:02.189 success 751, unsuccess 629, failed 0 00:23:02.189 13:52:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:02.189 13:52:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:02.189 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.476 Initializing NVMe Controllers 00:23:05.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:05.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:05.476 Initialization complete. Launching workers. 00:23:05.476 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8579, failed: 0 00:23:05.476 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7341 00:23:05.476 success 330, unsuccess 908, failed 0 00:23:05.476 13:52:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:05.476 13:52:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:05.476 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.765 Initializing NVMe Controllers 00:23:08.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:08.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:08.765 Initialization complete. Launching workers. 
00:23:08.765 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31469, failed: 0 00:23:08.765 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2819, failed to submit 28650 00:23:08.765 success 565, unsuccess 2254, failed 0 00:23:08.765 13:52:10 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:08.765 13:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.765 13:52:10 -- common/autotest_common.sh@10 -- # set +x 00:23:08.765 13:52:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.765 13:52:10 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:08.765 13:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.765 13:52:10 -- common/autotest_common.sh@10 -- # set +x 00:23:09.734 13:52:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.734 13:52:12 -- target/abort_qd_sizes.sh@61 -- # killprocess 2704577 00:23:09.734 13:52:12 -- common/autotest_common.sh@936 -- # '[' -z 2704577 ']' 00:23:09.734 13:52:12 -- common/autotest_common.sh@940 -- # kill -0 2704577 00:23:09.734 13:52:12 -- common/autotest_common.sh@941 -- # uname 00:23:09.734 13:52:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.734 13:52:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2704577 00:23:09.734 13:52:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.734 13:52:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.734 13:52:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2704577' 00:23:09.734 killing process with pid 2704577 00:23:09.734 13:52:12 -- common/autotest_common.sh@955 -- # kill 2704577 00:23:09.734 13:52:12 -- common/autotest_common.sh@960 -- # wait 2704577 00:23:09.734 00:23:09.734 real 0m14.157s 00:23:09.734 user 0m53.673s 00:23:09.734 sys 0m2.879s 00:23:09.734 13:52:12 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.734 13:52:12 -- common/autotest_common.sh@10 -- # set +x 00:23:09.734 ************************************ 00:23:09.734 END TEST spdk_target_abort 00:23:09.734 ************************************ 00:23:09.993 13:52:12 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:09.993 13:52:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:09.993 13:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.993 13:52:12 -- common/autotest_common.sh@10 -- # set +x 00:23:09.993 ************************************ 00:23:09.993 START TEST kernel_target_abort 00:23:09.993 ************************************ 00:23:09.993 13:52:12 -- common/autotest_common.sh@1111 -- # kernel_target 00:23:09.993 13:52:12 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:09.993 13:52:12 -- nvmf/common.sh@717 -- # local ip 00:23:09.993 13:52:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.993 13:52:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.993 13:52:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.993 13:52:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.993 13:52:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:09.993 13:52:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.993 13:52:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:09.993 13:52:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:09.993 13:52:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:09.993 13:52:12 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:09.993 13:52:12 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:09.993 13:52:12 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:09.993 13:52:12 -- nvmf/common.sh@624 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.993 13:52:12 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:09.993 13:52:12 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:09.993 13:52:12 -- nvmf/common.sh@628 -- # local block nvme 00:23:09.993 13:52:12 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:09.993 13:52:12 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:09.993 13:52:12 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:09.993 13:52:12 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:10.928 Waiting for block devices as requested 00:23:10.928 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:11.187 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:11.187 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:11.187 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:11.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:11.446 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:11.446 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:11.446 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:11.446 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:11.446 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:11.706 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:11.706 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:11.706 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:11.965 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:11.966 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:11.966 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:11.966 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:12.225 13:52:14 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:12.225 13:52:14 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:12.225 13:52:14 -- nvmf/common.sh@641 -- # 
is_block_zoned nvme0n1 00:23:12.225 13:52:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:12.225 13:52:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:12.225 13:52:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:12.225 13:52:14 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:12.225 13:52:14 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:12.225 13:52:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:12.225 No valid GPT data, bailing 00:23:12.225 13:52:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:12.225 13:52:14 -- scripts/common.sh@391 -- # pt= 00:23:12.225 13:52:14 -- scripts/common.sh@392 -- # return 1 00:23:12.225 13:52:14 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:12.225 13:52:14 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:12.225 13:52:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:12.225 13:52:14 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:12.225 13:52:14 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:12.225 13:52:14 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:12.225 13:52:14 -- nvmf/common.sh@656 -- # echo 1 00:23:12.225 13:52:14 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:12.225 13:52:14 -- nvmf/common.sh@658 -- # echo 1 00:23:12.225 13:52:14 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:12.225 13:52:14 -- nvmf/common.sh@661 -- # echo tcp 00:23:12.225 13:52:14 -- nvmf/common.sh@662 -- # echo 4420 00:23:12.225 13:52:14 -- nvmf/common.sh@663 -- # echo ipv4 00:23:12.225 13:52:14 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:12.225 13:52:14 -- nvmf/common.sh@669 
-- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:12.225 00:23:12.225 Discovery Log Number of Records 2, Generation counter 2 00:23:12.225 =====Discovery Log Entry 0====== 00:23:12.225 trtype: tcp 00:23:12.225 adrfam: ipv4 00:23:12.225 subtype: current discovery subsystem 00:23:12.225 treq: not specified, sq flow control disable supported 00:23:12.226 portid: 1 00:23:12.226 trsvcid: 4420 00:23:12.226 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:12.226 traddr: 10.0.0.1 00:23:12.226 eflags: none 00:23:12.226 sectype: none 00:23:12.226 =====Discovery Log Entry 1====== 00:23:12.226 trtype: tcp 00:23:12.226 adrfam: ipv4 00:23:12.226 subtype: nvme subsystem 00:23:12.226 treq: not specified, sq flow control disable supported 00:23:12.226 portid: 1 00:23:12.226 trsvcid: 4420 00:23:12.226 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:12.226 traddr: 10.0.0.1 00:23:12.226 eflags: none 00:23:12.226 sectype: none 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:12.226 13:52:14 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:12.226 13:52:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:12.226 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.512 Initializing NVMe Controllers 00:23:15.512 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:15.512 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:15.512 Initialization complete. Launching workers. 
00:23:15.512 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37348, failed: 0 00:23:15.512 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37348, failed to submit 0 00:23:15.512 success 0, unsuccess 37348, failed 0 00:23:15.512 13:52:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:15.512 13:52:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:15.512 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.800 Initializing NVMe Controllers 00:23:18.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:18.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:18.800 Initialization complete. Launching workers. 00:23:18.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70006, failed: 0 00:23:18.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17658, failed to submit 52348 00:23:18.800 success 0, unsuccess 17658, failed 0 00:23:18.800 13:52:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:18.800 13:52:21 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:18.800 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.345 Initializing NVMe Controllers 00:23:21.345 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:21.345 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:21.345 Initialization complete. Launching workers. 
00:23:21.345 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71110, failed: 0 00:23:21.345 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17762, failed to submit 53348 00:23:21.345 success 0, unsuccess 17762, failed 0 00:23:21.345 13:52:24 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:21.345 13:52:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:21.345 13:52:24 -- nvmf/common.sh@675 -- # echo 0 00:23:21.345 13:52:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:21.345 13:52:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:21.345 13:52:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:21.345 13:52:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:21.345 13:52:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:21.345 13:52:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:21.604 13:52:24 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:22.538 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:22.538 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:23:22.538 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:22.538 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:23.475 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:23.733 00:23:23.733 real 0m13.705s 00:23:23.733 user 0m5.514s 00:23:23.733 sys 0m3.009s 00:23:23.733 13:52:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:23.733 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.733 ************************************ 00:23:23.733 END TEST kernel_target_abort 00:23:23.733 ************************************ 00:23:23.733 13:52:26 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:23.733 13:52:26 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:23.733 13:52:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:23.733 13:52:26 -- nvmf/common.sh@117 -- # sync 00:23:23.733 13:52:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.733 13:52:26 -- nvmf/common.sh@120 -- # set +e 00:23:23.733 13:52:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.733 13:52:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.733 rmmod nvme_tcp 00:23:23.733 rmmod nvme_fabrics 00:23:23.733 rmmod nvme_keyring 00:23:23.733 13:52:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.733 13:52:26 -- nvmf/common.sh@124 -- # set -e 00:23:23.733 13:52:26 -- nvmf/common.sh@125 -- # return 0 00:23:23.733 13:52:26 -- nvmf/common.sh@478 -- # '[' -n 2704577 ']' 00:23:23.733 13:52:26 -- nvmf/common.sh@479 -- # killprocess 2704577 00:23:23.733 13:52:26 -- common/autotest_common.sh@936 -- # '[' -z 2704577 ']' 00:23:23.733 13:52:26 -- common/autotest_common.sh@940 -- # kill -0 2704577 00:23:23.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2704577) - No such process 00:23:23.733 13:52:26 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 2704577 is not found' 00:23:23.733 Process with pid 2704577 is not found 00:23:23.733 13:52:26 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:23.733 13:52:26 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:24.672 Waiting for block devices as requested 00:23:24.672 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:24.935 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:24.935 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:24.935 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:24.935 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:24.935 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:25.196 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:25.196 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:25.196 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:25.196 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:25.455 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:25.455 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:25.455 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:25.713 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:25.713 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:25.713 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:25.713 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:25.973 13:52:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:25.973 13:52:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:25.973 13:52:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.973 13:52:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.973 13:52:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.973 13:52:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:25.973 13:52:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.889 13:52:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.889 00:23:27.889 real 0m36.969s 00:23:27.889 
user 1m1.119s 00:23:27.889 sys 0m9.061s 00:23:27.889 13:52:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.889 13:52:30 -- common/autotest_common.sh@10 -- # set +x 00:23:27.889 ************************************ 00:23:27.889 END TEST nvmf_abort_qd_sizes 00:23:27.889 ************************************ 00:23:27.889 13:52:30 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:27.889 13:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:27.889 13:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.889 13:52:30 -- common/autotest_common.sh@10 -- # set +x 00:23:28.146 ************************************ 00:23:28.146 START TEST keyring_file 00:23:28.147 ************************************ 00:23:28.147 13:52:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:28.147 * Looking for test storage... 00:23:28.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:28.147 13:52:30 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:28.147 13:52:30 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.147 13:52:30 -- nvmf/common.sh@7 -- # uname -s 00:23:28.147 13:52:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.147 13:52:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.147 13:52:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.147 13:52:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.147 13:52:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.147 13:52:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.147 13:52:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.147 13:52:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.147 13:52:30 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.147 13:52:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.147 13:52:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:28.147 13:52:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:28.147 13:52:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.147 13:52:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.147 13:52:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.147 13:52:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.147 13:52:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.147 13:52:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.147 13:52:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.147 13:52:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.147 13:52:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.147 13:52:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.147 13:52:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.147 13:52:30 -- paths/export.sh@5 -- # export PATH 00:23:28.147 13:52:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.147 13:52:30 -- nvmf/common.sh@47 -- # : 0 00:23:28.147 13:52:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.147 13:52:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.147 13:52:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.147 13:52:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.147 13:52:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.147 13:52:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.147 13:52:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.147 13:52:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.147 13:52:30 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:28.147 13:52:30 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:28.147 13:52:30 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:28.147 13:52:30 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:28.147 13:52:30 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:28.147 13:52:30 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:23:28.147 13:52:30 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:28.147 13:52:30 -- keyring/common.sh@15 -- # local name key digest path 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # name=key0 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # digest=0 00:23:28.147 13:52:30 -- keyring/common.sh@18 -- # mktemp 00:23:28.147 13:52:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.uxFUOK6YYR 00:23:28.147 13:52:30 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:28.147 13:52:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:28.147 13:52:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # digest=0 00:23:28.147 13:52:30 -- nvmf/common.sh@694 -- # python - 00:23:28.147 13:52:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uxFUOK6YYR 00:23:28.147 13:52:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.uxFUOK6YYR 00:23:28.147 13:52:30 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uxFUOK6YYR 00:23:28.147 13:52:30 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:28.147 13:52:30 -- keyring/common.sh@15 -- # local name key digest path 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # name=key1 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:28.147 13:52:30 -- keyring/common.sh@17 -- # digest=0 00:23:28.147 13:52:30 -- keyring/common.sh@18 -- # mktemp 00:23:28.147 13:52:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.TUyuwjYN2S 00:23:28.147 13:52:30 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:28.147 
13:52:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:28.147 13:52:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:28.147 13:52:30 -- nvmf/common.sh@693 -- # digest=0 00:23:28.147 13:52:30 -- nvmf/common.sh@694 -- # python - 00:23:28.147 13:52:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TUyuwjYN2S 00:23:28.147 13:52:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.TUyuwjYN2S 00:23:28.147 13:52:30 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TUyuwjYN2S 00:23:28.147 13:52:30 -- keyring/file.sh@30 -- # tgtpid=2710352 00:23:28.147 13:52:30 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:28.147 13:52:30 -- keyring/file.sh@32 -- # waitforlisten 2710352 00:23:28.147 13:52:30 -- common/autotest_common.sh@817 -- # '[' -z 2710352 ']' 00:23:28.147 13:52:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.147 13:52:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:28.147 13:52:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.147 13:52:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:28.147 13:52:30 -- common/autotest_common.sh@10 -- # set +x 00:23:28.147 [2024-04-18 13:52:30.939831] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 
00:23:28.147 [2024-04-18 13:52:30.939918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710352 ] 00:23:28.406 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.406 [2024-04-18 13:52:30.999442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.406 [2024-04-18 13:52:31.104981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.663 13:52:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.663 13:52:31 -- common/autotest_common.sh@850 -- # return 0 00:23:28.663 13:52:31 -- keyring/file.sh@33 -- # rpc_cmd 00:23:28.663 13:52:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.663 13:52:31 -- common/autotest_common.sh@10 -- # set +x 00:23:28.663 [2024-04-18 13:52:31.379362] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.663 null0 00:23:28.663 [2024-04-18 13:52:31.411426] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.663 [2024-04-18 13:52:31.411791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:28.663 [2024-04-18 13:52:31.419432] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:28.663 13:52:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.663 13:52:31 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:28.663 13:52:31 -- common/autotest_common.sh@638 -- # local es=0 00:23:28.663 13:52:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:28.663 13:52:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:28.663 13:52:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:28.663 13:52:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:28.663 13:52:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:28.663 13:52:31 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:28.663 13:52:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.663 13:52:31 -- common/autotest_common.sh@10 -- # set +x 00:23:28.663 [2024-04-18 13:52:31.427452] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:28.663 { 00:23:28.663 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.663 "secure_channel": false, 00:23:28.663 "listen_address": { 00:23:28.663 "trtype": "tcp", 00:23:28.663 "traddr": "127.0.0.1", 00:23:28.663 "trsvcid": "4420" 00:23:28.663 }, 00:23:28.663 "method": "nvmf_subsystem_add_listener", 00:23:28.663 "req_id": 1 00:23:28.663 } 00:23:28.663 Got JSON-RPC error response 00:23:28.663 response: 00:23:28.663 { 00:23:28.663 "code": -32602, 00:23:28.663 "message": "Invalid parameters" 00:23:28.663 } 00:23:28.663 13:52:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:28.663 13:52:31 -- common/autotest_common.sh@641 -- # es=1 00:23:28.663 13:52:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:28.664 13:52:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:28.664 13:52:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:28.664 13:52:31 -- keyring/file.sh@46 -- # bperfpid=2710384 00:23:28.664 13:52:31 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:28.664 13:52:31 -- keyring/file.sh@48 -- # waitforlisten 2710384 /var/tmp/bperf.sock 00:23:28.664 13:52:31 -- common/autotest_common.sh@817 -- # '[' -z 2710384 ']' 00:23:28.664 13:52:31 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:28.664 13:52:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:28.664 13:52:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:28.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:28.664 13:52:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:28.664 13:52:31 -- common/autotest_common.sh@10 -- # set +x 00:23:28.922 [2024-04-18 13:52:31.476660] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:23:28.922 [2024-04-18 13:52:31.476729] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710384 ] 00:23:28.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.922 [2024-04-18 13:52:31.534669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.922 [2024-04-18 13:52:31.646393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.181 13:52:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:29.181 13:52:31 -- common/autotest_common.sh@850 -- # return 0 00:23:29.181 13:52:31 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:29.181 13:52:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:29.438 13:52:32 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TUyuwjYN2S 00:23:29.438 13:52:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TUyuwjYN2S 00:23:29.698 13:52:32 -- keyring/file.sh@51 -- # get_key key0 
00:23:29.698 13:52:32 -- keyring/file.sh@51 -- # jq -r .path 00:23:29.698 13:52:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:29.698 13:52:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.698 13:52:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:29.957 13:52:32 -- keyring/file.sh@51 -- # [[ /tmp/tmp.uxFUOK6YYR == \/\t\m\p\/\t\m\p\.\u\x\F\U\O\K\6\Y\Y\R ]] 00:23:29.957 13:52:32 -- keyring/file.sh@52 -- # get_key key1 00:23:29.957 13:52:32 -- keyring/file.sh@52 -- # jq -r .path 00:23:29.957 13:52:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:29.957 13:52:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.957 13:52:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:29.957 13:52:32 -- keyring/file.sh@52 -- # [[ /tmp/tmp.TUyuwjYN2S == \/\t\m\p\/\t\m\p\.\T\U\y\u\w\j\Y\N\2\S ]] 00:23:29.957 13:52:32 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:29.957 13:52:32 -- keyring/common.sh@12 -- # get_key key0 00:23:29.957 13:52:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:29.957 13:52:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:29.957 13:52:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:29.957 13:52:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:30.215 13:52:32 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:30.215 13:52:32 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:30.215 13:52:32 -- keyring/common.sh@12 -- # get_key key1 00:23:30.215 13:52:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:30.215 13:52:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:30.215 13:52:32 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:30.215 13:52:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:30.485 13:52:33 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:30.485 13:52:33 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:30.485 13:52:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:30.745 [2024-04-18 13:52:33.479508] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.005 nvme0n1 00:23:31.005 13:52:33 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:31.005 13:52:33 -- keyring/common.sh@12 -- # get_key key0 00:23:31.005 13:52:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:31.005 13:52:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.005 13:52:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:31.005 13:52:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:31.263 13:52:33 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:31.263 13:52:33 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:31.263 13:52:33 -- keyring/common.sh@12 -- # get_key key1 00:23:31.263 13:52:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:31.263 13:52:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:31.263 13:52:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:31.263 13:52:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:23:31.263 13:52:34 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:31.263 13:52:34 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:31.523 Running I/O for 1 seconds... 00:23:32.459 00:23:32.459 Latency(us) 00:23:32.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:32.459 nvme0n1 : 1.02 6332.32 24.74 0.00 0.00 19996.55 4102.07 25437.68 00:23:32.459 =================================================================================================================== 00:23:32.459 Total : 6332.32 24.74 0.00 0.00 19996.55 4102.07 25437.68 00:23:32.459 0 00:23:32.459 13:52:35 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:32.459 13:52:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:32.754 13:52:35 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:32.754 13:52:35 -- keyring/common.sh@12 -- # get_key key0 00:23:32.754 13:52:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:32.754 13:52:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.754 13:52:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.754 13:52:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.012 13:52:35 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:33.012 13:52:35 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:33.012 13:52:35 -- keyring/common.sh@12 -- # get_key key1 00:23:33.012 13:52:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.012 13:52:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.012 13:52:35 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.012 13:52:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:33.269 13:52:35 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:33.269 13:52:35 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:33.269 13:52:35 -- common/autotest_common.sh@638 -- # local es=0 00:23:33.269 13:52:35 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:33.269 13:52:35 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:33.269 13:52:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.269 13:52:35 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:33.269 13:52:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.269 13:52:35 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:33.269 13:52:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:33.527 [2024-04-18 13:52:36.184543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.527 [2024-04-18 13:52:36.184616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4430 (107): Transport endpoint is not connected 
00:23:33.527 [2024-04-18 13:52:36.185606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed4430 (9): Bad file descriptor 00:23:33.527 [2024-04-18 13:52:36.186605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:33.527 [2024-04-18 13:52:36.186625] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:33.527 [2024-04-18 13:52:36.186639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:33.527 request: 00:23:33.527 { 00:23:33.527 "name": "nvme0", 00:23:33.527 "trtype": "tcp", 00:23:33.527 "traddr": "127.0.0.1", 00:23:33.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:33.527 "adrfam": "ipv4", 00:23:33.527 "trsvcid": "4420", 00:23:33.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.527 "psk": "key1", 00:23:33.527 "method": "bdev_nvme_attach_controller", 00:23:33.527 "req_id": 1 00:23:33.527 } 00:23:33.527 Got JSON-RPC error response 00:23:33.527 response: 00:23:33.527 { 00:23:33.527 "code": -32602, 00:23:33.527 "message": "Invalid parameters" 00:23:33.527 } 00:23:33.527 13:52:36 -- common/autotest_common.sh@641 -- # es=1 00:23:33.527 13:52:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:33.527 13:52:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:33.527 13:52:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:33.527 13:52:36 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:33.527 13:52:36 -- keyring/common.sh@12 -- # get_key key0 00:23:33.527 13:52:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.527 13:52:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.527 13:52:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.527 13:52:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.785 13:52:36 -- keyring/file.sh@71 
-- # (( 1 == 1 )) 00:23:33.785 13:52:36 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:33.785 13:52:36 -- keyring/common.sh@12 -- # get_key key1 00:23:33.785 13:52:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.785 13:52:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.785 13:52:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.785 13:52:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:34.052 13:52:36 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:34.052 13:52:36 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:34.052 13:52:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:34.310 13:52:36 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:34.310 13:52:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:34.568 13:52:37 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:34.568 13:52:37 -- keyring/file.sh@77 -- # jq length 00:23:34.568 13:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.827 13:52:37 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:34.827 13:52:37 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.uxFUOK6YYR 00:23:34.827 13:52:37 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:34.827 13:52:37 -- common/autotest_common.sh@638 -- # local es=0 00:23:34.827 13:52:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:34.827 13:52:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:34.827 13:52:37 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:23:34.827 13:52:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:34.827 13:52:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:34.827 13:52:37 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:34.827 13:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:35.085 [2024-04-18 13:52:37.636484] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uxFUOK6YYR': 0100660 00:23:35.085 [2024-04-18 13:52:37.636537] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:35.085 request: 00:23:35.085 { 00:23:35.085 "name": "key0", 00:23:35.085 "path": "/tmp/tmp.uxFUOK6YYR", 00:23:35.085 "method": "keyring_file_add_key", 00:23:35.085 "req_id": 1 00:23:35.085 } 00:23:35.085 Got JSON-RPC error response 00:23:35.085 response: 00:23:35.085 { 00:23:35.085 "code": -1, 00:23:35.085 "message": "Operation not permitted" 00:23:35.085 } 00:23:35.085 13:52:37 -- common/autotest_common.sh@641 -- # es=1 00:23:35.085 13:52:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:35.085 13:52:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:35.085 13:52:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:35.085 13:52:37 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.uxFUOK6YYR 00:23:35.085 13:52:37 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:35.085 13:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uxFUOK6YYR 00:23:35.343 13:52:37 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.uxFUOK6YYR 00:23:35.343 13:52:37 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:35.343 13:52:37 -- keyring/common.sh@12 -- # get_key key0 
00:23:35.343 13:52:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:35.343 13:52:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.343 13:52:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:35.343 13:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.344 13:52:38 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:35.344 13:52:38 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:35.344 13:52:38 -- common/autotest_common.sh@638 -- # local es=0 00:23:35.344 13:52:38 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:35.344 13:52:38 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:35.344 13:52:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:35.344 13:52:38 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:35.344 13:52:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:35.344 13:52:38 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:35.344 13:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:35.602 [2024-04-18 13:52:38.378573] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uxFUOK6YYR': No such file or directory 00:23:35.602 [2024-04-18 13:52:38.378619] 
nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:35.602 [2024-04-18 13:52:38.378646] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:35.602 [2024-04-18 13:52:38.378657] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:35.602 [2024-04-18 13:52:38.378668] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:35.602 request: 00:23:35.602 { 00:23:35.602 "name": "nvme0", 00:23:35.602 "trtype": "tcp", 00:23:35.602 "traddr": "127.0.0.1", 00:23:35.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:35.602 "adrfam": "ipv4", 00:23:35.602 "trsvcid": "4420", 00:23:35.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.602 "psk": "key0", 00:23:35.602 "method": "bdev_nvme_attach_controller", 00:23:35.602 "req_id": 1 00:23:35.602 } 00:23:35.602 Got JSON-RPC error response 00:23:35.602 response: 00:23:35.602 { 00:23:35.602 "code": -19, 00:23:35.602 "message": "No such device" 00:23:35.602 } 00:23:35.602 13:52:38 -- common/autotest_common.sh@641 -- # es=1 00:23:35.602 13:52:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:35.602 13:52:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:35.602 13:52:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:35.602 13:52:38 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:35.602 13:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:35.861 13:52:38 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:35.861 13:52:38 -- keyring/common.sh@15 -- # local name key digest path 00:23:35.861 13:52:38 -- keyring/common.sh@17 -- # name=key0 00:23:35.861 13:52:38 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:35.861 13:52:38 -- 
keyring/common.sh@17 -- # digest=0 00:23:35.861 13:52:38 -- keyring/common.sh@18 -- # mktemp 00:23:35.861 13:52:38 -- keyring/common.sh@18 -- # path=/tmp/tmp.WFmx1uLw15 00:23:35.861 13:52:38 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:35.861 13:52:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:35.861 13:52:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:35.861 13:52:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:35.861 13:52:38 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:35.861 13:52:38 -- nvmf/common.sh@693 -- # digest=0 00:23:35.861 13:52:38 -- nvmf/common.sh@694 -- # python - 00:23:36.203 13:52:38 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WFmx1uLw15 00:23:36.203 13:52:38 -- keyring/common.sh@23 -- # echo /tmp/tmp.WFmx1uLw15 00:23:36.203 13:52:38 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WFmx1uLw15 00:23:36.203 13:52:38 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WFmx1uLw15 00:23:36.203 13:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WFmx1uLw15 00:23:36.203 13:52:38 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:36.203 13:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:36.461 nvme0n1 00:23:36.461 13:52:39 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:36.461 13:52:39 -- keyring/common.sh@12 -- # get_key key0 00:23:36.461 13:52:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.461 13:52:39 -- keyring/common.sh@10 
-- # bperf_cmd keyring_get_keys 00:23:36.461 13:52:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.461 13:52:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.719 13:52:39 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:36.719 13:52:39 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:36.719 13:52:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:36.977 13:52:39 -- keyring/file.sh@101 -- # get_key key0 00:23:36.977 13:52:39 -- keyring/file.sh@101 -- # jq -r .removed 00:23:36.977 13:52:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.977 13:52:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.977 13:52:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:37.236 13:52:39 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:37.236 13:52:39 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:37.236 13:52:39 -- keyring/common.sh@12 -- # get_key key0 00:23:37.236 13:52:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.236 13:52:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.236 13:52:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.236 13:52:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:37.494 13:52:40 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:37.494 13:52:40 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:37.494 13:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:37.752 13:52:40 -- keyring/file.sh@104 
-- # bperf_cmd keyring_get_keys 00:23:37.752 13:52:40 -- keyring/file.sh@104 -- # jq length 00:23:37.752 13:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.011 13:52:40 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:38.011 13:52:40 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WFmx1uLw15 00:23:38.011 13:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WFmx1uLw15 00:23:38.268 13:52:40 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TUyuwjYN2S 00:23:38.268 13:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TUyuwjYN2S 00:23:38.526 13:52:41 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.526 13:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.783 nvme0n1 00:23:38.783 13:52:41 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:38.783 13:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:39.040 13:52:41 -- keyring/file.sh@112 -- # config='{ 00:23:39.040 "subsystems": [ 00:23:39.040 { 00:23:39.040 "subsystem": "keyring", 00:23:39.040 "config": [ 00:23:39.040 { 00:23:39.040 "method": "keyring_file_add_key", 00:23:39.040 "params": { 00:23:39.040 "name": "key0", 00:23:39.040 "path": "/tmp/tmp.WFmx1uLw15" 00:23:39.040 } 00:23:39.040 }, 00:23:39.040 { 
00:23:39.040 "method": "keyring_file_add_key", 00:23:39.040 "params": { 00:23:39.040 "name": "key1", 00:23:39.040 "path": "/tmp/tmp.TUyuwjYN2S" 00:23:39.040 } 00:23:39.040 } 00:23:39.040 ] 00:23:39.040 }, 00:23:39.040 { 00:23:39.040 "subsystem": "iobuf", 00:23:39.040 "config": [ 00:23:39.040 { 00:23:39.040 "method": "iobuf_set_options", 00:23:39.040 "params": { 00:23:39.040 "small_pool_count": 8192, 00:23:39.040 "large_pool_count": 1024, 00:23:39.040 "small_bufsize": 8192, 00:23:39.040 "large_bufsize": 135168 00:23:39.040 } 00:23:39.040 } 00:23:39.040 ] 00:23:39.040 }, 00:23:39.040 { 00:23:39.040 "subsystem": "sock", 00:23:39.040 "config": [ 00:23:39.040 { 00:23:39.040 "method": "sock_impl_set_options", 00:23:39.040 "params": { 00:23:39.040 "impl_name": "posix", 00:23:39.040 "recv_buf_size": 2097152, 00:23:39.040 "send_buf_size": 2097152, 00:23:39.040 "enable_recv_pipe": true, 00:23:39.040 "enable_quickack": false, 00:23:39.040 "enable_placement_id": 0, 00:23:39.040 "enable_zerocopy_send_server": true, 00:23:39.040 "enable_zerocopy_send_client": false, 00:23:39.040 "zerocopy_threshold": 0, 00:23:39.040 "tls_version": 0, 00:23:39.040 "enable_ktls": false 00:23:39.040 } 00:23:39.040 }, 00:23:39.040 { 00:23:39.040 "method": "sock_impl_set_options", 00:23:39.040 "params": { 00:23:39.040 "impl_name": "ssl", 00:23:39.040 "recv_buf_size": 4096, 00:23:39.040 "send_buf_size": 4096, 00:23:39.040 "enable_recv_pipe": true, 00:23:39.040 "enable_quickack": false, 00:23:39.040 "enable_placement_id": 0, 00:23:39.040 "enable_zerocopy_send_server": true, 00:23:39.040 "enable_zerocopy_send_client": false, 00:23:39.040 "zerocopy_threshold": 0, 00:23:39.040 "tls_version": 0, 00:23:39.040 "enable_ktls": false 00:23:39.040 } 00:23:39.040 } 00:23:39.040 ] 00:23:39.040 }, 00:23:39.040 { 00:23:39.040 "subsystem": "vmd", 00:23:39.040 "config": [] 00:23:39.040 }, 00:23:39.040 { 00:23:39.040 "subsystem": "accel", 00:23:39.040 "config": [ 00:23:39.040 { 00:23:39.040 "method": 
"accel_set_options", 00:23:39.040 "params": { 00:23:39.040 "small_cache_size": 128, 00:23:39.040 "large_cache_size": 16, 00:23:39.040 "task_count": 2048, 00:23:39.040 "sequence_count": 2048, 00:23:39.040 "buf_count": 2048 00:23:39.041 } 00:23:39.041 } 00:23:39.041 ] 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "subsystem": "bdev", 00:23:39.041 "config": [ 00:23:39.041 { 00:23:39.041 "method": "bdev_set_options", 00:23:39.041 "params": { 00:23:39.041 "bdev_io_pool_size": 65535, 00:23:39.041 "bdev_io_cache_size": 256, 00:23:39.041 "bdev_auto_examine": true, 00:23:39.041 "iobuf_small_cache_size": 128, 00:23:39.041 "iobuf_large_cache_size": 16 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_raid_set_options", 00:23:39.041 "params": { 00:23:39.041 "process_window_size_kb": 1024 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_iscsi_set_options", 00:23:39.041 "params": { 00:23:39.041 "timeout_sec": 30 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_nvme_set_options", 00:23:39.041 "params": { 00:23:39.041 "action_on_timeout": "none", 00:23:39.041 "timeout_us": 0, 00:23:39.041 "timeout_admin_us": 0, 00:23:39.041 "keep_alive_timeout_ms": 10000, 00:23:39.041 "arbitration_burst": 0, 00:23:39.041 "low_priority_weight": 0, 00:23:39.041 "medium_priority_weight": 0, 00:23:39.041 "high_priority_weight": 0, 00:23:39.041 "nvme_adminq_poll_period_us": 10000, 00:23:39.041 "nvme_ioq_poll_period_us": 0, 00:23:39.041 "io_queue_requests": 512, 00:23:39.041 "delay_cmd_submit": true, 00:23:39.041 "transport_retry_count": 4, 00:23:39.041 "bdev_retry_count": 3, 00:23:39.041 "transport_ack_timeout": 0, 00:23:39.041 "ctrlr_loss_timeout_sec": 0, 00:23:39.041 "reconnect_delay_sec": 0, 00:23:39.041 "fast_io_fail_timeout_sec": 0, 00:23:39.041 "disable_auto_failback": false, 00:23:39.041 "generate_uuids": false, 00:23:39.041 "transport_tos": 0, 00:23:39.041 "nvme_error_stat": false, 00:23:39.041 "rdma_srq_size": 0, 
00:23:39.041 "io_path_stat": false, 00:23:39.041 "allow_accel_sequence": false, 00:23:39.041 "rdma_max_cq_size": 0, 00:23:39.041 "rdma_cm_event_timeout_ms": 0, 00:23:39.041 "dhchap_digests": [ 00:23:39.041 "sha256", 00:23:39.041 "sha384", 00:23:39.041 "sha512" 00:23:39.041 ], 00:23:39.041 "dhchap_dhgroups": [ 00:23:39.041 "null", 00:23:39.041 "ffdhe2048", 00:23:39.041 "ffdhe3072", 00:23:39.041 "ffdhe4096", 00:23:39.041 "ffdhe6144", 00:23:39.041 "ffdhe8192" 00:23:39.041 ] 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_nvme_attach_controller", 00:23:39.041 "params": { 00:23:39.041 "name": "nvme0", 00:23:39.041 "trtype": "TCP", 00:23:39.041 "adrfam": "IPv4", 00:23:39.041 "traddr": "127.0.0.1", 00:23:39.041 "trsvcid": "4420", 00:23:39.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.041 "prchk_reftag": false, 00:23:39.041 "prchk_guard": false, 00:23:39.041 "ctrlr_loss_timeout_sec": 0, 00:23:39.041 "reconnect_delay_sec": 0, 00:23:39.041 "fast_io_fail_timeout_sec": 0, 00:23:39.041 "psk": "key0", 00:23:39.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:39.041 "hdgst": false, 00:23:39.041 "ddgst": false 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_nvme_set_hotplug", 00:23:39.041 "params": { 00:23:39.041 "period_us": 100000, 00:23:39.041 "enable": false 00:23:39.041 } 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "method": "bdev_wait_for_examine" 00:23:39.041 } 00:23:39.041 ] 00:23:39.041 }, 00:23:39.041 { 00:23:39.041 "subsystem": "nbd", 00:23:39.041 "config": [] 00:23:39.041 } 00:23:39.041 ] 00:23:39.041 }' 00:23:39.041 13:52:41 -- keyring/file.sh@114 -- # killprocess 2710384 00:23:39.041 13:52:41 -- common/autotest_common.sh@936 -- # '[' -z 2710384 ']' 00:23:39.041 13:52:41 -- common/autotest_common.sh@940 -- # kill -0 2710384 00:23:39.041 13:52:41 -- common/autotest_common.sh@941 -- # uname 00:23:39.041 13:52:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:39.041 13:52:41 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2710384 00:23:39.299 13:52:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:39.299 13:52:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:39.299 13:52:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2710384' 00:23:39.299 killing process with pid 2710384 00:23:39.299 13:52:41 -- common/autotest_common.sh@955 -- # kill 2710384 00:23:39.299 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.299 00:23:39.299 Latency(us) 00:23:39.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.299 =================================================================================================================== 00:23:39.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.299 13:52:41 -- common/autotest_common.sh@960 -- # wait 2710384 00:23:39.558 13:52:42 -- keyring/file.sh@117 -- # bperfpid=2711810 00:23:39.558 13:52:42 -- keyring/file.sh@119 -- # waitforlisten 2711810 /var/tmp/bperf.sock 00:23:39.558 13:52:42 -- common/autotest_common.sh@817 -- # '[' -z 2711810 ']' 00:23:39.558 13:52:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:39.558 13:52:42 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:39.558 13:52:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:39.558 13:52:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:39.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
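The `bperf_cmd` helper seen throughout this run is just `scripts/rpc.py -s /var/tmp/bperf.sock <method> …`, i.e. JSON-RPC 2.0 over a Unix domain socket. A minimal sketch of that client framing (the request layout is standard JSON-RPC and the socket path and method names are taken from the log above; everything else here is an assumption, not SPDK's actual rpc.py internals):

```python
import json
import socket

def build_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request of the shape rpc.py sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params:
        req["params"] = params
    return json.dumps(req).encode()

def bperf_cmd(sock_path, method, params=None):
    """Send one request over a Unix socket and read one reply (sketch only)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)  # e.g. /var/tmp/bperf.sock from the log
        s.sendall(build_request(method, params))
        return json.loads(s.recv(65536))
```

The test's `get_refcnt` wrapper is then just a `keyring_get_keys` call filtered with `jq '.[] | select(.name == "key0")'` and `.refcnt` extracted.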
00:23:39.558 13:52:42 -- keyring/file.sh@115 -- # echo '{ 00:23:39.558 "subsystems": [ 00:23:39.558 { 00:23:39.558 "subsystem": "keyring", 00:23:39.558 "config": [ 00:23:39.558 { 00:23:39.558 "method": "keyring_file_add_key", 00:23:39.558 "params": { 00:23:39.558 "name": "key0", 00:23:39.558 "path": "/tmp/tmp.WFmx1uLw15" 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "keyring_file_add_key", 00:23:39.558 "params": { 00:23:39.558 "name": "key1", 00:23:39.558 "path": "/tmp/tmp.TUyuwjYN2S" 00:23:39.558 } 00:23:39.558 } 00:23:39.558 ] 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "subsystem": "iobuf", 00:23:39.558 "config": [ 00:23:39.558 { 00:23:39.558 "method": "iobuf_set_options", 00:23:39.558 "params": { 00:23:39.558 "small_pool_count": 8192, 00:23:39.558 "large_pool_count": 1024, 00:23:39.558 "small_bufsize": 8192, 00:23:39.558 "large_bufsize": 135168 00:23:39.558 } 00:23:39.558 } 00:23:39.558 ] 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "subsystem": "sock", 00:23:39.558 "config": [ 00:23:39.558 { 00:23:39.558 "method": "sock_impl_set_options", 00:23:39.558 "params": { 00:23:39.558 "impl_name": "posix", 00:23:39.558 "recv_buf_size": 2097152, 00:23:39.558 "send_buf_size": 2097152, 00:23:39.558 "enable_recv_pipe": true, 00:23:39.558 "enable_quickack": false, 00:23:39.558 "enable_placement_id": 0, 00:23:39.558 "enable_zerocopy_send_server": true, 00:23:39.558 "enable_zerocopy_send_client": false, 00:23:39.558 "zerocopy_threshold": 0, 00:23:39.558 "tls_version": 0, 00:23:39.558 "enable_ktls": false 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "sock_impl_set_options", 00:23:39.558 "params": { 00:23:39.558 "impl_name": "ssl", 00:23:39.558 "recv_buf_size": 4096, 00:23:39.558 "send_buf_size": 4096, 00:23:39.558 "enable_recv_pipe": true, 00:23:39.558 "enable_quickack": false, 00:23:39.558 "enable_placement_id": 0, 00:23:39.558 "enable_zerocopy_send_server": true, 00:23:39.558 "enable_zerocopy_send_client": false, 00:23:39.558 
"zerocopy_threshold": 0, 00:23:39.558 "tls_version": 0, 00:23:39.558 "enable_ktls": false 00:23:39.558 } 00:23:39.558 } 00:23:39.558 ] 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "subsystem": "vmd", 00:23:39.558 "config": [] 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "subsystem": "accel", 00:23:39.558 "config": [ 00:23:39.558 { 00:23:39.558 "method": "accel_set_options", 00:23:39.558 "params": { 00:23:39.558 "small_cache_size": 128, 00:23:39.558 "large_cache_size": 16, 00:23:39.558 "task_count": 2048, 00:23:39.558 "sequence_count": 2048, 00:23:39.558 "buf_count": 2048 00:23:39.558 } 00:23:39.558 } 00:23:39.558 ] 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "subsystem": "bdev", 00:23:39.558 "config": [ 00:23:39.558 { 00:23:39.558 "method": "bdev_set_options", 00:23:39.558 "params": { 00:23:39.558 "bdev_io_pool_size": 65535, 00:23:39.558 "bdev_io_cache_size": 256, 00:23:39.558 "bdev_auto_examine": true, 00:23:39.558 "iobuf_small_cache_size": 128, 00:23:39.558 "iobuf_large_cache_size": 16 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "bdev_raid_set_options", 00:23:39.558 "params": { 00:23:39.558 "process_window_size_kb": 1024 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "bdev_iscsi_set_options", 00:23:39.558 "params": { 00:23:39.558 "timeout_sec": 30 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "bdev_nvme_set_options", 00:23:39.558 "params": { 00:23:39.558 "action_on_timeout": "none", 00:23:39.558 "timeout_us": 0, 00:23:39.558 "timeout_admin_us": 0, 00:23:39.558 "keep_alive_timeout_ms": 10000, 00:23:39.558 "arbitration_burst": 0, 00:23:39.558 "low_priority_weight": 0, 00:23:39.558 "medium_priority_weight": 0, 00:23:39.558 "high_priority_weight": 0, 00:23:39.558 "nvme_adminq_poll_period_us": 10000, 00:23:39.558 "nvme_ioq_poll_period_us": 0, 00:23:39.558 "io_queue_requests": 512, 00:23:39.558 "delay_cmd_submit": true, 00:23:39.558 "transport_retry_count": 4, 00:23:39.558 "bdev_retry_count": 3, 
00:23:39.558 "transport_ack_timeout": 0, 00:23:39.558 "ctrlr_loss_timeout_sec": 0, 00:23:39.558 "reconnect_delay_sec": 0, 00:23:39.558 "fast_io_fail_timeout_sec": 0, 00:23:39.558 "disable_auto_failback": false, 00:23:39.558 "generate_uuids": false, 00:23:39.558 "transport_tos": 0, 00:23:39.558 "nvme_error_stat": false, 00:23:39.558 "rdma_srq_size": 0, 00:23:39.558 "io_path_stat": false, 00:23:39.558 "allow_accel_sequence": false, 00:23:39.558 "rdma_max_cq_size": 0, 00:23:39.558 "rdma_cm_event_timeout_ms": 0, 00:23:39.558 "dhchap_digests": [ 00:23:39.558 "sha256", 00:23:39.558 "sha384", 00:23:39.558 "sha512" 00:23:39.558 ], 00:23:39.558 "dhchap_dhgroups": [ 00:23:39.558 "null", 00:23:39.558 "ffdhe2048", 00:23:39.558 "ffdhe3072", 00:23:39.558 "ffdhe4096", 00:23:39.558 "ffdhe6144", 00:23:39.558 "ffdhe8192" 00:23:39.558 ] 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.558 "method": "bdev_nvme_attach_controller", 00:23:39.558 "params": { 00:23:39.558 "name": "nvme0", 00:23:39.558 "trtype": "TCP", 00:23:39.558 "adrfam": "IPv4", 00:23:39.558 "traddr": "127.0.0.1", 00:23:39.558 "trsvcid": "4420", 00:23:39.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.558 "prchk_reftag": false, 00:23:39.558 "prchk_guard": false, 00:23:39.558 "ctrlr_loss_timeout_sec": 0, 00:23:39.558 "reconnect_delay_sec": 0, 00:23:39.558 "fast_io_fail_timeout_sec": 0, 00:23:39.558 "psk": "key0", 00:23:39.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:39.558 "hdgst": false, 00:23:39.558 "ddgst": false 00:23:39.558 } 00:23:39.558 }, 00:23:39.558 { 00:23:39.559 "method": "bdev_nvme_set_hotplug", 00:23:39.559 "params": { 00:23:39.559 "period_us": 100000, 00:23:39.559 "enable": false 00:23:39.559 } 00:23:39.559 }, 00:23:39.559 { 00:23:39.559 "method": "bdev_wait_for_examine" 00:23:39.559 } 00:23:39.559 ] 00:23:39.559 }, 00:23:39.559 { 00:23:39.559 "subsystem": "nbd", 00:23:39.559 "config": [] 00:23:39.559 } 00:23:39.559 ] 00:23:39.559 }' 00:23:39.559 13:52:42 -- common/autotest_common.sh@826 -- 
# xtrace_disable 00:23:39.559 13:52:42 -- common/autotest_common.sh@10 -- # set +x 00:23:39.559 [2024-04-18 13:52:42.183430] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 24.03.0 initialization... 00:23:39.559 [2024-04-18 13:52:42.183539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711810 ] 00:23:39.559 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.559 [2024-04-18 13:52:42.246829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.559 [2024-04-18 13:52:42.360578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.818 [2024-04-18 13:52:42.548404] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.384 13:52:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:40.384 13:52:43 -- common/autotest_common.sh@850 -- # return 0 00:23:40.384 13:52:43 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:40.384 13:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.384 13:52:43 -- keyring/file.sh@120 -- # jq length 00:23:40.642 13:52:43 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:40.642 13:52:43 -- keyring/file.sh@121 -- # get_refcnt key0 00:23:40.642 13:52:43 -- keyring/common.sh@12 -- # get_key key0 00:23:40.642 13:52:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.642 13:52:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.642 13:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.642 13:52:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:40.900 13:52:43 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:40.900 13:52:43 -- 
keyring/file.sh@122 -- # get_refcnt key1 00:23:40.900 13:52:43 -- keyring/common.sh@12 -- # get_key key1 00:23:40.900 13:52:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.900 13:52:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.900 13:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.900 13:52:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:41.158 13:52:43 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:41.158 13:52:43 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:41.158 13:52:43 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:41.158 13:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:41.417 13:52:44 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:41.417 13:52:44 -- keyring/file.sh@1 -- # cleanup 00:23:41.417 13:52:44 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WFmx1uLw15 /tmp/tmp.TUyuwjYN2S 00:23:41.417 13:52:44 -- keyring/file.sh@20 -- # killprocess 2711810 00:23:41.417 13:52:44 -- common/autotest_common.sh@936 -- # '[' -z 2711810 ']' 00:23:41.417 13:52:44 -- common/autotest_common.sh@940 -- # kill -0 2711810 00:23:41.417 13:52:44 -- common/autotest_common.sh@941 -- # uname 00:23:41.417 13:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.417 13:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2711810 00:23:41.417 13:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:41.417 13:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:41.417 13:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2711810' 00:23:41.417 killing process with pid 2711810 00:23:41.417 13:52:44 -- common/autotest_common.sh@955 -- # kill 2711810 00:23:41.417 Received shutdown signal, test time was 
about 1.000000 seconds 00:23:41.417 00:23:41.417 Latency(us) 00:23:41.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.417 =================================================================================================================== 00:23:41.417 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.417 13:52:44 -- common/autotest_common.sh@960 -- # wait 2711810 00:23:41.677 13:52:44 -- keyring/file.sh@21 -- # killprocess 2710352 00:23:41.677 13:52:44 -- common/autotest_common.sh@936 -- # '[' -z 2710352 ']' 00:23:41.677 13:52:44 -- common/autotest_common.sh@940 -- # kill -0 2710352 00:23:41.677 13:52:44 -- common/autotest_common.sh@941 -- # uname 00:23:41.677 13:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.677 13:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2710352 00:23:41.677 13:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:41.677 13:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:41.677 13:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2710352' 00:23:41.677 killing process with pid 2710352 00:23:41.677 13:52:44 -- common/autotest_common.sh@955 -- # kill 2710352 00:23:41.677 [2024-04-18 13:52:44.437984] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:41.677 13:52:44 -- common/autotest_common.sh@960 -- # wait 2710352 00:23:42.245 00:23:42.245 real 0m14.175s 00:23:42.245 user 0m34.886s 00:23:42.245 sys 0m3.362s 00:23:42.245 13:52:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:42.245 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:23:42.245 ************************************ 00:23:42.245 END TEST keyring_file 00:23:42.245 ************************************ 00:23:42.245 13:52:44 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:23:42.245 13:52:44 -- 
spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:42.245 13:52:44 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:42.245 13:52:44 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:42.245 13:52:44 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:42.245 13:52:44 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:42.245 13:52:44 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:42.245 13:52:44 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:42.245 13:52:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:42.245 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:23:42.245 13:52:44 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:23:42.245 13:52:44 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:42.245 13:52:44 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:42.245 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:23:44.148 INFO: APP EXITING 00:23:44.148 INFO: killing all VMs 00:23:44.148 INFO: killing vhost app 00:23:44.148 INFO: EXIT DONE 00:23:45.084 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:23:45.084 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:23:45.084 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:23:45.084 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:23:45.084 
0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:23:45.084 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:23:45.084 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:23:45.084 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:23:45.084 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:23:45.084 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:23:45.084 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:23:45.084 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:23:45.084 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:23:45.084 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:23:45.084 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:23:45.084 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:23:45.084 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:23:46.460 Cleaning 00:23:46.460 Removing: /var/run/dpdk/spdk0/config 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:46.460 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:46.460 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:46.460 Removing: /var/run/dpdk/spdk1/config 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:46.460 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:46.460 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:46.460 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:46.460 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:46.460 Removing: /var/run/dpdk/spdk2/config 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:46.460 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:46.460 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:46.460 Removing: /var/run/dpdk/spdk3/config 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:46.460 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:46.460 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:46.460 Removing: /var/run/dpdk/spdk4/config 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:46.460 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:46.460 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:46.460 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:46.460 Removing: /dev/shm/bdev_svc_trace.1 00:23:46.460 Removing: /dev/shm/nvmf_trace.0 00:23:46.460 Removing: /dev/shm/spdk_tgt_trace.pid2482048 00:23:46.460 Removing: /var/run/dpdk/spdk0 00:23:46.460 Removing: /var/run/dpdk/spdk1 00:23:46.460 Removing: /var/run/dpdk/spdk2 00:23:46.460 Removing: /var/run/dpdk/spdk3 00:23:46.460 Removing: /var/run/dpdk/spdk4 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2480332 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2481083 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2482048 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2482534 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2483233 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2483490 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2484226 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2484244 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2484502 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2485924 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2487360 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2487674 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2487875 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2488210 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2488422 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2488583 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2488870 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2489060 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2489546 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2491894 
00:23:46.460 Removing: /var/run/dpdk/spdk_pid2492195 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2492496 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2492501 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2492938 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2492943 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2493378 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2493514 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2493690 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2493820 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2493997 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2494137 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2494513 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2494796 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2495004 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2495204 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2495351 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2495559 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2495721 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2496003 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2496171 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2496338 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2496617 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2496786 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2497066 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2497233 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2497441 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2497677 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2497848 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2498130 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2498297 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2498575 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2498741 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2498913 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2499197 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2499362 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2499643 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2499814 00:23:46.460 Removing: 
/var/run/dpdk/spdk_pid2500009 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2500357 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2502581 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2529216 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2531729 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2537617 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2540944 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2543358 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2543854 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2551021 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2551026 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2551684 00:23:46.460 Removing: /var/run/dpdk/spdk_pid2552220 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2552881 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2553278 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2553289 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2553540 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2553567 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2553679 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2554278 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2554993 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2556158 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2556564 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2556568 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2556817 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2557729 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2558460 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2563966 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2564129 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2566794 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2570518 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2572577 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2579007 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2584259 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2585452 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2586188 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2597165 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2599503 
00:23:46.461 Removing: /var/run/dpdk/spdk_pid2602314 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2603493 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2604812 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2604834 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2604974 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2605107 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2605556 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2606873 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2607609 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2608036 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2609776 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2610340 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2610912 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2613458 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2619387 00:23:46.461 Removing: /var/run/dpdk/spdk_pid2622648 00:23:46.718 Removing: /var/run/dpdk/spdk_pid2626449 00:23:46.718 Removing: /var/run/dpdk/spdk_pid2627408 00:23:46.718 Removing: /var/run/dpdk/spdk_pid2628598 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2631210 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2633606 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2637985 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2637987 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2640914 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2641050 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2641183 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2641452 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2641580 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2644237 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2644572 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2647184 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2649120 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2652564 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2655903 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2660487 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2660495 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2673278 00:23:46.719 Removing: 
/var/run/dpdk/spdk_pid2673690 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2674211 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2674621 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2675213 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2675622 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2676032 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2676514 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2679087 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2679352 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2683174 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2683364 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2684978 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2690032 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2690037 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2692970 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2694479 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2696518 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2697273 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2698805 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2699680 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2705008 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2705278 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2705670 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2707238 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2707518 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2707919 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2710352 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2710384 00:23:46.719 Removing: /var/run/dpdk/spdk_pid2711810 00:23:46.719 Clean 00:23:46.719 13:52:49 -- common/autotest_common.sh@1437 -- # return 0 00:23:46.719 13:52:49 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:23:46.719 13:52:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:46.719 13:52:49 -- common/autotest_common.sh@10 -- # set +x 00:23:46.977 13:52:49 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:23:46.977 13:52:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:46.977 13:52:49 -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.977 13:52:49 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:23:46.977 13:52:49 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:23:46.977 13:52:49 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:23:46.977 13:52:49 -- spdk/autotest.sh@389 -- # hash lcov 00:23:46.977 13:52:49 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:46.977 13:52:49 -- spdk/autotest.sh@391 -- # hostname 00:23:46.977 13:52:49 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:23:46.977 geninfo: WARNING: invalid characters removed from testname! 
00:24:13.502 13:53:16 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:17.679 13:53:20 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:20.235 13:53:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:24.438 13:53:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:27.774 13:53:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:31.052 13:53:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:24:33.582 13:53:35 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:33.582 13:53:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.582 13:53:36 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:33.582 13:53:36 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.582 13:53:36 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.582 13:53:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.582 13:53:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.582 13:53:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.582 13:53:36 -- paths/export.sh@5 -- $ export PATH 00:24:33.582 13:53:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.582 13:53:36 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:24:33.582 13:53:36 -- common/autobuild_common.sh@435 -- $ date +%s 00:24:33.582 13:53:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713441216.XXXXXX 00:24:33.582 13:53:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713441216.LxDg2p 00:24:33.582 13:53:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:24:33.582 13:53:36 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:24:33.582 13:53:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:24:33.582 13:53:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:24:33.582 13:53:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:24:33.582 13:53:36 -- common/autobuild_common.sh@451 -- $ get_config_params 00:24:33.582 13:53:36 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:24:33.582 13:53:36 -- common/autotest_common.sh@10 -- $ set +x 00:24:33.582 13:53:36 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:24:33.582 13:53:36 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:24:33.582 13:53:36 -- pm/common@17 -- $ local monitor 00:24:33.582 13:53:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:33.582 13:53:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2720366 00:24:33.582 13:53:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:33.582 13:53:36 -- pm/common@21 -- $ date +%s 00:24:33.582 13:53:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2720368 00:24:33.582 13:53:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:33.582 13:53:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2720371 00:24:33.582 13:53:36 -- pm/common@21 -- $ date +%s 00:24:33.582 13:53:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:33.582 13:53:36 -- pm/common@21 -- $ date +%s 00:24:33.582 13:53:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2720374 00:24:33.582 13:53:36 -- pm/common@26 -- $ sleep 1 00:24:33.582 13:53:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441216 00:24:33.582 13:53:36 -- pm/common@21 -- $ date +%s 00:24:33.582 13:53:36 -- pm/common@21 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441216 00:24:33.582 13:53:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441216 00:24:33.582 13:53:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713441216 00:24:33.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441216_collect-vmstat.pm.log 00:24:33.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441216_collect-cpu-temp.pm.log 00:24:33.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441216_collect-bmc-pm.bmc.pm.log 00:24:33.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713441216_collect-cpu-load.pm.log 00:24:34.522 13:53:37 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:24:34.522 13:53:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:24:34.522 13:53:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:34.522 13:53:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:34.522 13:53:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:34.522 13:53:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:34.522 13:53:37 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:34.522 13:53:37 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:34.522 13:53:37 -- 
common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:34.522 13:53:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:34.522 13:53:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:34.522 13:53:37 -- pm/common@30 -- $ signal_monitor_resources TERM 00:24:34.522 13:53:37 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:24:34.522 13:53:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.522 13:53:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:24:34.522 13:53:37 -- pm/common@45 -- $ pid=2720386 00:24:34.522 13:53:37 -- pm/common@52 -- $ sudo kill -TERM 2720386 00:24:34.522 13:53:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.522 13:53:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:24:34.522 13:53:37 -- pm/common@45 -- $ pid=2720387 00:24:34.522 13:53:37 -- pm/common@52 -- $ sudo kill -TERM 2720387 00:24:34.522 13:53:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.522 13:53:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:24:34.523 13:53:37 -- pm/common@45 -- $ pid=2720383 00:24:34.523 13:53:37 -- pm/common@52 -- $ sudo kill -TERM 2720383 00:24:34.523 13:53:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:34.523 13:53:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:24:34.523 13:53:37 -- pm/common@45 -- $ pid=2720390 00:24:34.523 13:53:37 -- pm/common@52 -- $ sudo kill -TERM 2720390 00:24:34.523 + [[ -n 2396888 ]] 00:24:34.523 + sudo kill 2396888 00:24:34.533 [Pipeline] } 00:24:34.549 [Pipeline] // stage 
00:24:34.553 [Pipeline] } 00:24:34.569 [Pipeline] // timeout 00:24:34.573 [Pipeline] } 00:24:34.590 [Pipeline] // catchError 00:24:34.594 [Pipeline] } 00:24:34.609 [Pipeline] // wrap 00:24:34.614 [Pipeline] } 00:24:34.626 [Pipeline] // catchError 00:24:34.634 [Pipeline] stage 00:24:34.636 [Pipeline] { (Epilogue) 00:24:34.651 [Pipeline] catchError 00:24:34.652 [Pipeline] { 00:24:34.669 [Pipeline] echo 00:24:34.671 Cleanup processes 00:24:34.676 [Pipeline] sh 00:24:34.963 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:34.963 2720514 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:24:34.963 2720652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:34.977 [Pipeline] sh 00:24:35.260 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:35.260 ++ grep -v 'sudo pgrep' 00:24:35.260 ++ awk '{print $1}' 00:24:35.260 + sudo kill -9 2720514 00:24:35.272 [Pipeline] sh 00:24:35.564 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:43.684 [Pipeline] sh 00:24:43.977 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:43.978 Artifacts sizes are good 00:24:44.003 [Pipeline] archiveArtifacts 00:24:44.009 Archiving artifacts 00:24:44.193 [Pipeline] sh 00:24:44.473 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:24:44.487 [Pipeline] cleanWs 00:24:44.496 [WS-CLEANUP] Deleting project workspace... 00:24:44.496 [WS-CLEANUP] Deferred wipeout is used... 00:24:44.503 [WS-CLEANUP] done 00:24:44.504 [Pipeline] } 00:24:44.524 [Pipeline] // catchError 00:24:44.535 [Pipeline] sh 00:24:44.815 + logger -p user.info -t JENKINS-CI 00:24:44.822 [Pipeline] } 00:24:44.837 [Pipeline] // stage 00:24:44.843 [Pipeline] } 00:24:44.857 [Pipeline] // node 00:24:44.862 [Pipeline] End of Pipeline 00:24:44.887 Finished: SUCCESS